Jan 26 22:39:43 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 22:39:43 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:43 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 
22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 22:39:44 crc 
restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 22:39:44 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
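The restorecon pass ends just above: nearly every path under /var/lib/kubelet keeps its existing label because container_file_t (with a per-pod MCS pair such as s0:c7,c13) is treated as an admin customization that restorecon preserves unless forced with -F, and only a handful of files (config.json, the kubenswrapper binary) are actually relabeled. One way to audit a pass like this is to tally the "not reset as customized by admin to <context>" messages per SELinux context. The following is a minimal sketch, not part of restorecon or the kubelet; the script and its regex are assumptions written against the exact message format in the journal output above, and it assumes each journal entry sits on its own line as journalctl emits it.

```python
#!/usr/bin/env python3
"""Hypothetical helper: summarize a restorecon journal pass by counting,
per SELinux context, how many paths were left alone because their label
counted as an admin customization. Message format copied from the log
above; not an official tool."""
import re
import sys
from collections import Counter

# Matches e.g.
#   ... restorecon[4679]: /var/lib/kubelet/... not reset as customized
#   by admin to system_u:object_r:container_file_t:s0:c7,c13
NOT_RESET = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin "
    r"to (?P<context>\S+)"
)

def summarize(lines):
    """Count skipped paths per target context."""
    per_context = Counter()
    for line in lines:
        m = NOT_RESET.search(line)
        if m:
            per_context[m.group("context")] += 1
    return per_context

if __name__ == "__main__":
    # Usage (assumed invocation): journalctl -b | python3 restorecon_summary.py
    for ctx, count in summarize(sys.stdin).most_common():
        print(f"{count:6d}  {ctx}")
```

Against this boot log, most of the skipped paths would bucket under a single catalog pod's context (s0:c7,c13), which is why the pass is dominated by the two operator-catalog pods' empty-dir contents.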
Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 22:39:45 crc kubenswrapper[4793]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.483613 4793 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490558 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490591 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490603 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490612 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490622 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490632 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490643 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490654 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490664 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490674 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490683 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490692 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490700 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490709 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490717 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490725 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490733 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490741 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490749 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490757 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490765 4793 feature_gate.go:330] unrecognized 
feature gate: ImageStreamImportMode Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490773 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490788 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490796 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490804 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490812 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490819 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490827 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490835 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490843 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490851 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490859 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490867 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490876 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490885 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490894 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490903 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490911 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490919 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490927 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490935 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490942 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490950 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490957 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490968 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490978 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490987 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.490996 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491004 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491012 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491020 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491032 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491041 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491050 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491058 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491067 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491076 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491084 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491092 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491100 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491108 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491117 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491125 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491132 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491140 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491147 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491155 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491165 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491173 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491180 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.491213 4793 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491386 4793 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491404 4793 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491429 4793 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491442 4793 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491454 4793 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491464 4793 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491489 4793 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491502 4793 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491511 4793 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491520 4793 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491530 4793 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491540 4793 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491549 4793 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491559 4793 flags.go:64] FLAG: --cgroup-root="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491568 4793 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491578 4793 flags.go:64] FLAG: --client-ca-file="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491589 4793 flags.go:64] FLAG: --cloud-config="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491600 4793 flags.go:64] FLAG: --cloud-provider="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491609 4793 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491635 4793 flags.go:64] FLAG: --cluster-domain="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491644 4793 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491654 4793 flags.go:64] FLAG: --config-dir="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491663 4793 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491673 4793 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491685 4793 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491694 4793 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491703 4793 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491713 4793 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 22:39:45 crc 
kubenswrapper[4793]: I0126 22:39:45.491722 4793 flags.go:64] FLAG: --contention-profiling="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491730 4793 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491751 4793 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491760 4793 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491769 4793 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491781 4793 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491790 4793 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491800 4793 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491809 4793 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491818 4793 flags.go:64] FLAG: --enable-server="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491826 4793 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491845 4793 flags.go:64] FLAG: --event-burst="100" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491854 4793 flags.go:64] FLAG: --event-qps="50" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491864 4793 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491884 4793 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491894 4793 flags.go:64] FLAG: --eviction-hard="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491905 4793 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491914 4793 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491923 4793 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491932 4793 flags.go:64] FLAG: --eviction-soft="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491943 4793 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491953 4793 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491961 4793 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491970 4793 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491979 4793 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491988 4793 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.491997 4793 flags.go:64] FLAG: --feature-gates="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492008 4793 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492017 4793 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492026 4793 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 22:39:45 crc 
kubenswrapper[4793]: I0126 22:39:45.492035 4793 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492044 4793 flags.go:64] FLAG: --healthz-port="10248" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492053 4793 flags.go:64] FLAG: --help="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492062 4793 flags.go:64] FLAG: --hostname-override="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492072 4793 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492081 4793 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492090 4793 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492099 4793 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492108 4793 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492117 4793 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492126 4793 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492136 4793 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492145 4793 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492154 4793 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492163 4793 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492172 4793 flags.go:64] FLAG: --kube-reserved="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492181 4793 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492216 4793 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492225 4793 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492234 4793 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492255 4793 flags.go:64] FLAG: --lock-file="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492265 4793 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492274 4793 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492283 4793 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492296 4793 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492305 4793 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492314 4793 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492322 4793 flags.go:64] FLAG: --logging-format="text" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492331 4793 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492341 4793 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 22:39:45 crc 
kubenswrapper[4793]: I0126 22:39:45.492350 4793 flags.go:64] FLAG: --manifest-url="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492359 4793 flags.go:64] FLAG: --manifest-url-header="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492372 4793 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492382 4793 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492393 4793 flags.go:64] FLAG: --max-pods="110" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492403 4793 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492412 4793 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492421 4793 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492430 4793 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492439 4793 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492448 4793 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492457 4793 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492483 4793 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492492 4793 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492501 4793 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492510 4793 flags.go:64] FLAG: --pod-cidr="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492518 4793 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492533 4793 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492542 4793 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492551 4793 flags.go:64] FLAG: --pods-per-core="0" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492559 4793 flags.go:64] FLAG: --port="10250" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492568 4793 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492577 4793 flags.go:64] FLAG: --provider-id="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492586 4793 flags.go:64] FLAG: --qos-reserved="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492595 4793 flags.go:64] FLAG: --read-only-port="10255" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492604 4793 flags.go:64] FLAG: --register-node="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492631 4793 flags.go:64] FLAG: --register-schedulable="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492640 4793 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492656 4793 flags.go:64] FLAG: --registry-burst="10" Jan 26 22:39:45 crc 
kubenswrapper[4793]: I0126 22:39:45.492665 4793 flags.go:64] FLAG: --registry-qps="5" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492674 4793 flags.go:64] FLAG: --reserved-cpus="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492682 4793 flags.go:64] FLAG: --reserved-memory="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492694 4793 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492703 4793 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492712 4793 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492721 4793 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492729 4793 flags.go:64] FLAG: --runonce="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492739 4793 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492748 4793 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492757 4793 flags.go:64] FLAG: --seccomp-default="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492765 4793 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492774 4793 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492783 4793 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492792 4793 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492802 4793 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492811 4793 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492820 4793 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492829 4793 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492838 4793 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492847 4793 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492856 4793 flags.go:64] FLAG: --system-cgroups="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492865 4793 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492878 4793 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492887 4793 flags.go:64] FLAG: --tls-cert-file="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492895 4793 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492913 4793 flags.go:64] FLAG: --tls-min-version="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492922 4793 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492940 4793 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492949 4793 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 22:39:45 crc kubenswrapper[4793]: 
I0126 22:39:45.492958 4793 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492967 4793 flags.go:64] FLAG: --v="2" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.492979 4793 flags.go:64] FLAG: --version="false" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493002 4793 flags.go:64] FLAG: --vmodule="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493014 4793 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493023 4793 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493280 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493291 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493299 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493306 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493315 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493322 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493331 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493338 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493346 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493354 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493382 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493395 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493405 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493413 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493422 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493429 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493437 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493447 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493456 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493466 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493475 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493483 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493492 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493501 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493514 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493523 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493531 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493539 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493546 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493554 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493563 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493570 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493578 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493587 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493595 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493603 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493612 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493619 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493627 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493639 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493646 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493654 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493663 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493670 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 
26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493678 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493685 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493693 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493701 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493709 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493717 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493725 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493733 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493741 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493749 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493757 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493765 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493779 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493789 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493799 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493808 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493817 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493827 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493835 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493844 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493852 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493861 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493870 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493879 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493887 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493896 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.493904 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.493919 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.506586 4793 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.506624 4793 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506752 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506764 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506773 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506785 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506796 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506806 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506815 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506823 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506831 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506839 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506847 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506856 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506863 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506871 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506879 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506888 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506896 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506905 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506912 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506921 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506928 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506937 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506948 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506959 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506969 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506979 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506988 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.506997 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507006 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507017 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507028 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507037 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507047 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507055 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507065 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507075 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507085 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507094 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507103 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507112 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507120 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507128 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507136 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507146 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507154 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507162 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507171 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507180 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507212 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507221 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507229 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507237 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507246 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507254 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507262 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507270 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507278 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507290 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507298 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507306 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507315 4793 feature_gate.go:330] unrecognized feature gate: 
NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507323 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507331 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507339 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507347 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507356 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507364 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507371 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507379 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507388 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507396 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.507409 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507697 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507711 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507722 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507732 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507741 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507751 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507761 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507769 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507777 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507786 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507794 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507802 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507810 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507818 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507826 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507834 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507842 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507850 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507859 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507867 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507875 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507884 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507893 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507900 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507910 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507918 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507926 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507934 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507942 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507949 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507957 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507965 4793 feature_gate.go:330] unrecognized feature gate: 
ExternalOIDC Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507973 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507982 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507989 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.507998 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508006 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508014 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508022 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508030 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508038 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508045 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508053 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508062 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508072 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508082 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508091 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508100 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508109 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508118 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508125 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508134 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508142 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508149 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508158 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508165 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508173 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508181 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508214 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508222 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508231 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508238 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508246 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508254 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508262 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508270 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508277 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508285 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508293 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508305 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.508314 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.508326 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.509478 4793 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.515406 4793 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.515545 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517319 4793 server.go:997] "Starting client certificate rotation"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517425 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517675 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 20:41:28.784883191 +0000 UTC
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.517814 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.541594 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.545718 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.547175 4793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.567994 4793 log.go:25] "Validated CRI v1 runtime API"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.610660 4793 log.go:25] "Validated CRI v1 image API"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.613363 4793 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.618341 4793 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-22-35-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.618389 4793 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.646776 4793 manager.go:217] Machine: {Timestamp:2026-01-26 22:39:45.643179607 +0000 UTC m=+0.631951159 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:601eed9e-4791-49d9-902a-c6f8f21a8d0a BootID:4aaaf2a3-8422-4886-9dc8-d9142aad48d5 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:73:9f:f6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:73:9f:f6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9c:c5:b0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:dd:72:c2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1c:e7:5d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:80:be:23 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:af:ad:b9:f9:3a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:7e:0c:bc:0b:dd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.647677 4793 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.647951 4793 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.648477 4793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.648766 4793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.648824 4793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649358 4793 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649383 4793 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649835 4793 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.649871 4793 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.651005 4793 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.651677 4793 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656561 4793 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656600 4793 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656718 4793 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656748 4793 kubelet.go:324] "Adding apiserver pod source"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.656804 4793 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.664366 4793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.665491 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.665481 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.665618 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.665629 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.665686 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.670347 4793 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672137 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672212 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672229 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672247 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672270 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672285 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672299 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672323 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672343 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672358 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672379 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.672748 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.674132 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.674990 4793 server.go:1280] "Started kubelet"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.675822 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:45 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.677228 4793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.677231 4793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.684758 4793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.685444 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.685523 4793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.686233 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:05:55.002875021 +0000 UTC
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.686487 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687258 4793 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687555 4793 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687586 4793 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.687548 4793 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.689158 4793 factory.go:55] Registering systemd factory
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.689345 4793 factory.go:221] Registration of the systemd container factory successfully
Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.692951 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.693233 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.693996 4793 factory.go:153] Registering CRI-O factory
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694051 4793 factory.go:221] Registration of the crio container factory successfully
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694225 4793 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694273 4793 factory.go:103] Registering Raw factory
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.694302 4793 manager.go:1196] Started watching for new ooms in manager
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.695144 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms"
Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.694337 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6907c3dd6655 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:39:45.674937941 +0000 UTC m=+0.663709493,LastTimestamp:2026-01-26 22:39:45.674937941 +0000 UTC m=+0.663709493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.696172 4793 manager.go:319] Starting recovery of all containers
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704127 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704211 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704239 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704260 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704281 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704302 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704321 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704341 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704365 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704384 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704406 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704426 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704445 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704469 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704489 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704510 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704528 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704547 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704567 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704586 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704605 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704627 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704649 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704673 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704725 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704745 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704769 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704791 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704814 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704835 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704855 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704874 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704894 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704913 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704964 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.704986 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705006 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705025 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705044 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705097 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705117 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705136 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705159 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705179 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705220 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705241 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705261 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705282 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705303 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705322 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705344 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705362 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.705391 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707509 4793 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707678 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707805 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.707971 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708094 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708237 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708375 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708495 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708624 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708739 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708861 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.708974 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709132 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709319 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709442 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709567 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709690 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709803 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.709913 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710035 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710151 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710300 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710415 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710530 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710703 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710821 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.710937 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711049 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711173 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711344 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711468 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711583 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711704 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711829 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.711940 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712061 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712174 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712400 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712516 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712630 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712764 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712882 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.712999 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.713449 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.713946 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714168 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714394 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714577 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714763 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.714932 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715129 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715514 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715711 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.715885 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716080 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716296 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716497 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716668 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.716844 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717008 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717279 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717498 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717709 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.717873 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718051 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718251 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718445 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718618 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718775 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.718933 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719098 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719288 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719480 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719681 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.719875 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720035 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720163 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720342 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720473 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720778 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.720915 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721121 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721299 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721431 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721552 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721666 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721782 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.721941 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722078 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722306 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722435 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722557 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722845 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.722993 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.723123 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728243 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728329 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728473 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728536 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728564 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728598 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728622 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728649 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728836 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728863 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728892 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728913 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728941 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728962 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.728983 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729009 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729055 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729083 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729103 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729124 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729151 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729175 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729413 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729443 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729469 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729518 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729540 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729569 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729659 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729748 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729799 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729824 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729838 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729852 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729893 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729919 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729935 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729950 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729963 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729980 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.729992 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730009 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730021 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730032 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730049 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730062 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730076 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730090 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730100 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730117 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730130 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730141 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730155 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730166 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730181 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730208 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730239 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730255 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730267 4793 reconstruct.go:97] "Volume reconstruction finished" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.730277 4793 reconciler.go:26] "Reconciler: start to sync state" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.734395 4793 manager.go:324] Recovery completed Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.744407 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.747898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.747957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.747979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.751648 4793 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.751664 4793 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.751683 4793 state_mem.go:36] "Initialized new in-memory state store" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.755537 4793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.759480 4793 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.759581 4793 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.759665 4793 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.759827 4793 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 22:39:45 crc kubenswrapper[4793]: W0126 22:39:45.760919 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.761026 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.769777 4793 policy_none.go:49] "None policy: Start" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.771394 4793 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.771454 4793 state_mem.go:35] "Initializing new in-memory state store" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.787327 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.844765 4793 manager.go:334] "Starting Device Plugin manager" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.844838 4793 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.844864 4793 server.go:79] "Starting device plugin registration server" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.845623 4793 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.845658 4793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.845997 4793 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.846147 4793 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.846238 4793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.860121 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.860327 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.860888 4793 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.861947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862376 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862810 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.862899 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.863995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864315 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864500 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864596 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.864971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865785 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.865837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.866008 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.866071 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867414 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.867502 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868025 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.868904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869473 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.869523 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.870671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.870790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.870892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.895913 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934514 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934576 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934662 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.934967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935008 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935048 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.935122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.945894 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949051 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:45 crc kubenswrapper[4793]: I0126 22:39:45.949150 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:39:45 crc kubenswrapper[4793]: E0126 22:39:45.950083 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036641 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037003 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") 
" pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037143 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.036856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037177 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037184 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037312 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037371 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037366 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037457 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037490 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037523 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037597 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037882 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037921 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.037996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.150519 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.152775 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.153403 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.202345 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.217901 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.242153 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.256644 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.264447 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.265233 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216 WatchSource:0}: Error finding container 0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216: Status 404 returned error can't find the container with id 0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216 Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.268885 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581 WatchSource:0}: Error finding container 87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581: Status 404 returned error can't find the container with id 87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581 Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.279035 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c WatchSource:0}: Error finding container 65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c: Status 404 returned error can't find the container with id 65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.285632 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8 WatchSource:0}: Error finding container e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8: Status 404 returned error can't find the container with id e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8 Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.288677 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a WatchSource:0}: Error finding container da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a: Status 404 returned error can't find the container with id da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.297936 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.553952 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.556800 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.557531 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.677408 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.686383 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:54:00.932888546 +0000 UTC Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.767264 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0cde3547e11958469d79b54aaf3ff10882bd2dc85c8cb7e4c0cbb2b3bbdb5216"} Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.769862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"da896677d96a1f5c3b7105d04e84e8465e5192dd899f6267154b3fd2c2a1547a"} Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.772091 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9840f8a3a30728a3c07d7ee52004fa459a426fe36845e2a7a3dbb5940ca56a8"} Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.773705 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"65e6692cd526ef5352d60ee76112852af4af6f1053da8a890b3bfaf8b0334d0c"} Jan 26 22:39:46 crc kubenswrapper[4793]: I0126 22:39:46.774958 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87b51d62adcd6976bb8572556cb4f4789c17f635bd7ff1ef635a431be2b93581"} Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.896855 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.896992 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:46 crc kubenswrapper[4793]: W0126 22:39:46.915222 4793 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:46 crc kubenswrapper[4793]: E0126 22:39:46.915305 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:47 crc kubenswrapper[4793]: W0126 22:39:47.076712 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.076828 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.099403 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Jan 26 22:39:47 crc kubenswrapper[4793]: W0126 22:39:47.140073 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.140235 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.357710 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.359377 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.359976 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 26 22:39:47 
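
The reflector and lease failures above all trace back to the same unreachable endpoint, and the lease controller's retry interval doubles from 800ms to 1.6s here (and to 3.2s and 6.4s further down). That progression is consistent with a capped exponential backoff; a generic sketch under that assumption, with the cap chosen purely for illustration:

// backoff_sketch.go - reproduce the 800ms -> 1.6s -> 3.2s -> 6.4s progression
// seen in the "Failed to ensure lease exists, will retry" entries. The cap is
// an assumption, not a kubelet constant.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 800 * time.Millisecond
	const maxInterval = 7 * time.Second // illustrative cap
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: will retry in %s\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
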
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.676685 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.686807 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:28:32.675922172 +0000 UTC
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.739006 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 22:39:47 crc kubenswrapper[4793]: E0126 22:39:47.740508 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.781673 4793 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6" exitCode=0
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.781780 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.781777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6"}
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784399 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae8e9b05a172bfe620de76d3f4da816441ab8c2a36ae76a1fde1c0702d90abd8" exitCode=0
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae8e9b05a172bfe620de76d3f4da816441ab8c2a36ae76a1fde1c0702d90abd8"}
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784611 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.784729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.785998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.786032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.786045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.787088 4793 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018" exitCode=0
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.787163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018"}
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.787243 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.788378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.788425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.788443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.789584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3"}
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.792277 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" exitCode=0
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.792336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb"}
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.792409 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.793788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.793855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.793883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.797820 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.798652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.798681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:47 crc kubenswrapper[4793]: I0126 22:39:47.798694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.676982 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.687388 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:22:32.774076427 +0000 UTC
Jan 26 22:39:48 crc kubenswrapper[4793]: W0126 22:39:48.698859 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 26 22:39:48 crc kubenswrapper[4793]: E0126 22:39:48.699034 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 26 22:39:48 crc kubenswrapper[4793]: E0126 22:39:48.701124 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s"
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.799533 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd"}
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.799711 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.800727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.800761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.800773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.802919 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="524898d7681667e5eaaf2849d4b40c29cdfe17ea917d6324e27f5373d0415aa3" exitCode=0
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.802980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"524898d7681667e5eaaf2849d4b40c29cdfe17ea917d6324e27f5373d0415aa3"}
Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803085 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.803811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.810314 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.811959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.812023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.812042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815573 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.815636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.817007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.817138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.817227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820111 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.820144 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955"} Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.960865 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:48 crc kubenswrapper[4793]: I0126 22:39:48.964282 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:39:48 crc kubenswrapper[4793]: E0126 22:39:48.964934 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 26 22:39:49 crc kubenswrapper[4793]: W0126 22:39:49.351320 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: E0126 22:39:49.351469 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.382108 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.677624 4793 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: W0126 22:39:49.685577 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: E0126 22:39:49.685687 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.688399 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:34:25.857161041 +0000 UTC Jan 26 22:39:49 crc kubenswrapper[4793]: W0126 22:39:49.688796 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 26 22:39:49 crc kubenswrapper[4793]: E0126 22:39:49.688946 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.829267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7"} Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.829401 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.835629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.835754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.835784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.837855 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da0cb395873753c08aeba8f2e31e4d4e6c7d0325eaa9f75e0e29941c8b8cc14e" exitCode=0 Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.838074 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.838994 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839672 4793 
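
Each kubelet-serving entry above reports the same certificate expiration but a different, randomized rotation deadline on every retry. That pattern is consistent with picking a uniform random point inside a fixed window of the certificate's validity; the sketch below assumes a 70-90% window, and the issue time is illustrative (only the expiry appears in the log):

// rotation_deadline.go - compute a randomized rotation deadline the way the
// varying deadlines above suggest. The 70-90% window and the notBefore value
// are assumptions for illustration.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // uniform point in the assumed window
	deadline := notBefore.Add(time.Duration(float64(total) * frac))
	fmt.Println("rotation deadline:", deadline)
}
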
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839672 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da0cb395873753c08aeba8f2e31e4d4e6c7d0325eaa9f75e0e29941c8b8cc14e"}
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.839995 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.840747 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.842053 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.842106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.842124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.843148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.843229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.843250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.844116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.844161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.844182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.845292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.845342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:49 crc kubenswrapper[4793]: I0126 22:39:49.845360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.523286 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.689396 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:36:59.630674377 +0000 UTC
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847437 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847454 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847419 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"541574ffed2413b310e56114b191a3f1eb3ba027205d150337a6cf3cd9b900c5"}
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847595 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"110889ffccbcfde6eab709244c74d186a223f0e8d79f15d118d9efabecf77f91"}
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847634 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b48e7f71fb3c312a54fefa5a1fe3e661ac2b12fb841496d30e63b35c9c0c5e91"}
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.847438 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:50 crc kubenswrapper[4793]: I0126 22:39:50.849458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.152231 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.690463 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:44:21.181014808 +0000 UTC
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7818fd042574496a56a2a1d84d74e15bb85c3b0196001fc49bf89087fa6778ec"}
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856333 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856404 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.856331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8940542dcbafb8e5d9ade7eadf58c6d2fb344a81af7995b9a4ae7cbd4b20bf0f"}
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.857553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.857597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.857609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.858410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.858459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.858479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:51 crc kubenswrapper[4793]: I0126 22:39:51.945254 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.024334 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.024542 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.062604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.062684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.062708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.165844 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.167918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.167988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.168008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.168060 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.690667 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:56:05.125569026 +0000 UTC
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.860037 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.860145 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.865908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.865981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:52 crc kubenswrapper[4793]: I0126 22:39:52.866115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.635342 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.635696 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.637746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.637795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.637813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:39:53 crc kubenswrapper[4793]: I0126 22:39:53.691289 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:16:16.09618776 +0000 UTC
Jan 26 22:39:54 crc kubenswrapper[4793]: I0126 22:39:54.692779 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:08:34.429870291 +0000 UTC
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.142025 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.142363 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.144306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
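
The probe entries here ("SyncLoop (probe)", and the patch_prober/prober lines that follow) are plain HTTP(S) GETs the kubelet issues against each container's health endpoint, with a short timeout and no certificate verification. A stripped-down equivalent with the usual pass rule (2xx/3xx passes); the endpoint is the cluster-policy-controller URL from the log below, and the timeout is an illustrative value:

// http_probe.go - issue one health-check GET the way the probe entries
// suggest. On timeout this prints a "context deadline exceeded" error like
// the startup-probe failure logged below.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // illustrative probe timeout
		Transport: &http.Transport{
			// Health probes accept self-signed serving certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe succeeded:", resp.Status)
	} else {
		fmt.Println("probe failed with statuscode:", resp.StatusCode)
	}
}
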
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.144403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.653703 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.653978 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.656114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.656182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.656248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.662840 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.693819 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:37:49.476799072 +0000 UTC Jan 26 22:39:55 crc kubenswrapper[4793]: E0126 22:39:55.861077 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.870551 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.871890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.871964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:55 crc kubenswrapper[4793]: I0126 22:39:55.871985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:56 crc kubenswrapper[4793]: I0126 22:39:56.636121 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 22:39:56 crc kubenswrapper[4793]: I0126 22:39:56.636348 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 22:39:56 crc kubenswrapper[4793]: I0126 22:39:56.694808 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:23:39.232197862 +0000 UTC Jan 26 22:39:57 crc kubenswrapper[4793]: I0126 
22:39:57.695725 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:57:39.222791067 +0000 UTC Jan 26 22:39:58 crc kubenswrapper[4793]: I0126 22:39:58.696513 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:11:45.493709772 +0000 UTC Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.387105 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.387300 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.388447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.388489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.388503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.696902 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:31:03.761270811 +0000 UTC Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.896248 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60310->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 22:39:59 crc kubenswrapper[4793]: I0126 22:39:59.896352 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60310->192.168.126.11:17697: read: connection reset by peer" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.201707 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.201820 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.207943 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.208031 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.457077 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.457378 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.458694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.458730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.458744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.504094 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.533236 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]log ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]etcd ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-apiextensions-informers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/crd-informer-synced ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 22:40:00 crc kubenswrapper[4793]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/bootstrap-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-registration-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]autoregister-completion ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 22:40:00 crc kubenswrapper[4793]: livez check failed Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.535032 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.697594 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:42:16.54731371 +0000 UTC Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.886677 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.888980 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7" exitCode=255 Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.889125 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.889128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7"} Jan 26 22:40:00 crc 
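
The 403 earlier and the 500 above show the apiserver coming up in stages: first anonymous /livez calls are rejected because the RBAC bootstrap roles do not exist yet, then the verbose healthz body lists exactly which poststarthooks are still failing ([-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes). A tiny filter for pulling the failing checks out of such a body; the input is a fragment of the output logged above:

// healthz_failures.go - keep only the "[-]" lines from a verbose healthz body.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	body := `[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/bootstrap-controller ok
livez check failed`
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "[-]") {
			fmt.Println(sc.Text()) // the checks holding the probe at 500
		}
	}
}
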
Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.889454 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.890926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.891106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.891266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.892538 4793 scope.go:117] "RemoveContainer" containerID="2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7" Jan 26 22:40:00 crc kubenswrapper[4793]: I0126 22:40:00.904619 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.698340 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:09:18.196009219 +0000 UTC Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.895224 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.897454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d"} Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.897583 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.897716 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.898812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.898874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.898892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.899248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:01 crc kubenswrapper[4793]: I0126 22:40:01.899295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:01 crc
kubenswrapper[4793]: I0126 22:40:01.899306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:02 crc kubenswrapper[4793]: I0126 22:40:02.698641 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:52:04.14825634 +0000 UTC Jan 26 22:40:03 crc kubenswrapper[4793]: I0126 22:40:03.699826 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:39:31.737246139 +0000 UTC Jan 26 22:40:04 crc kubenswrapper[4793]: I0126 22:40:04.700174 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:22:49.188016649 +0000 UTC Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.189413 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.192914 4793 trace.go:236] Trace[390218279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:54.935) (total time: 10257ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[390218279]: ---"Objects listed" error:<nil> 10257ms (22:40:05.192) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[390218279]: [10.257607251s] [10.257607251s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.192945 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.193008 4793 trace.go:236] Trace[314768884]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:52.732) (total time: 12459ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[314768884]: ---"Objects listed" error:<nil> 12459ms (22:40:05.192) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[314768884]: [12.459970214s] [12.459970214s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.193041 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.195369 4793 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.195414 4793 trace.go:236] Trace[1326921677]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:53.659) (total time: 11536ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[1326921677]: ---"Objects listed" error:<nil> 11535ms (22:40:05.195) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[1326921677]: [11.536007909s] [11.536007909s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.195445 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.196747 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.199090 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
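[Editor's note] The Trace[...] blocks above come from client-go's reflector: each informer issues an initial LIST against the apiserver, any ListAndWatch slower than the trace threshold is dumped with per-step timings (the restored error:<nil> field means the LIST itself succeeded, just slowly), and "Caches populated" marks the end of the initial sync. A minimal sketch of that machinery, assuming a kubeconfig at /root/.kube/config purely for illustration:

```go
// Sketch of the client-go informer startup whose traces appear above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until the initial LIST completes; with a slow apiserver this
	// is where the 10-12s "Reflector ListAndWatch" traces above are spent.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		fmt.Println("cache sync failed")
		return
	}
	fmt.Println("caches populated for *v1.Node")
}
```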
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.222146 4793 trace.go:236] Trace[966903809]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 22:39:53.834) (total time: 11387ms): Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[966903809]: ---"Objects listed" error:<nil> 11386ms (22:40:05.220) Jan 26 22:40:05 crc kubenswrapper[4793]: Trace[966903809]: [11.387525821s] [11.387525821s] END Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.222249 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.383126 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.388419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.529343 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.529680 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.535175 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.671266 4793 apiserver.go:52] "Watching apiserver" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.677791 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.678874 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679309 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.679383 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679795 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679797 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.679851 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.680179 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.680527 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.680957 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.681345 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.681450 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.681572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683303 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683797 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.683833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.684245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.688589 4793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
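[Editor's note] With the desired state of the world populated, the kubelet's volume manager reconciler compares it against what is actually mounted; every volume still mounted for a pod that no longer exists gets an UnmountVolume operation, which produces the long run of reconciler_common.go entries that follows. A minimal sketch of that reconcile pattern, with hypothetical types and names rather than actual kubelet code:

```go
// Illustrative desired-state/actual-state reconcile loop; the names here
// are hypothetical, chosen only to mirror the log entries below.
package main

import "fmt"

type volumeName string

// reconcile unmounts every actually-mounted volume absent from desired state.
func reconcile(desired map[volumeName]bool, actual []volumeName) {
	for _, vol := range actual {
		if !desired[vol] {
			// Corresponds to "operationExecutor.UnmountVolume started".
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
		}
	}
}

func main() {
	desired := map[volumeName]bool{}                // pods deleted: nothing desired
	actual := []volumeName{"kube-api-access-dbsvg"} // still mounted on the node
	reconcile(desired, actual)
}
```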
Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698148 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698584 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.698890 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699050 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699221 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699387 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699545 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699389 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699698 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.699863 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700108 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700135 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700163 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700272 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700348 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700416 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700510 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700584 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700606 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700696 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700716 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700736 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700758 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700780 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700800 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700866 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700886 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700961 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.700979 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701034 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701075 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701132 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701174 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701231 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701251 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701289 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701346 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701367 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701385 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701444 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701463 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701503 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701591 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701612 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701652 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701672 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701707 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701760 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701798 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701816 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701834 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701850 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701867 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701883 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701898 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701917 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701936 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701968 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708817 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: 
I0126 22:40:05.708872 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708904 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708968 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708996 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709112 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709168 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 22:40:05 crc 
kubenswrapper[4793]: I0126 22:40:05.709210 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709243 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709322 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709431 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709459 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709490 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709516 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709593 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709682 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709710 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709923 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709959 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709992 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710018 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710045 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710070 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 
26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710125 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710153 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710182 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710233 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710264 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710354 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710382 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710477 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710505 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710591 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710617 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710644 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710870 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710932 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710958 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701007 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701594 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701707 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.701786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702079 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702679 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.702859 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.703178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.706811 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:11:12.215931821 +0000 UTC Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707018 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707045 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707087 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707538 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707867 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711812 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711011 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711935 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711985 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712008 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.707880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708441 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.708625 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709029 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709244 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712166 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712211 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712444 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712627 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.712994 4793 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713106 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713153 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713247 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713325 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713354 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713431 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713472 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713554 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713593 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713631 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713711 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713748 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713784 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713820 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713858 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713900 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713938 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713949 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714058 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714093 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714129 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714395 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714760 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714904 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714937 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714958 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714979 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714999 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715019 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715038 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715058 4793 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715078 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715099 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715120 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715141 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715161 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715208 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715248 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715269 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715296 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715326 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715359 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715380 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715410 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715432 4793 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715452 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" 
(UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715474 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715496 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715512 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715526 4793 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715543 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715560 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715594 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715610 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715626 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715640 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715654 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715669 4793 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715682 4793 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715705 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715723 4793 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715737 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715754 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715772 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715817 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715832 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715849 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.713945 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710634 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710758 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710828 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718420 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.710998 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.711866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714412 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714653 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.709958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.714860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.715913 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716066 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716121 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716470 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.716373 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.717264 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.717516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.717557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718592 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718697 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.718941 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719281 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719327 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719413 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.719449 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719852 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.719854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720121 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720695 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.720958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721010 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721090 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721211 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721451 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721688 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721731 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.721777 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722277 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722422 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.722757 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723278 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723507 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.723848 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724523 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724712 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724893 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.724912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.725127 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.725976 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.726599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.726804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727240 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727410 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.727862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.728097 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.728282 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.728226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.728815 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.728874 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.228849226 +0000 UTC m=+21.217620788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.728903 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.228893737 +0000 UTC m=+21.217665349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.729278 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.732526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.733101 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.733786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.734912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.735161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). 
InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.735951 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736148 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736211 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736231 4793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736380 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736425 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736440 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736724 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736796 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.736964 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.737539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.737558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.738175 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.740549 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.240527411 +0000 UTC m=+21.229299013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.741880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742763 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742813 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.742368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.743325 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.746341 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751278 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751376 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751602 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.751730 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.251708111 +0000 UTC m=+21.240479623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753527 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753567 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753588 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.753615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.753748 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:06.253725748 +0000 UTC m=+21.242497260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.758452 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.758738 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.764954 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.767138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.767173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.767344 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768099 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768221 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768444 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768653 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768672 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768696 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768883 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768722 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768827 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768954 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.768980 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.769839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.770272 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.770563 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.770708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771169 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771757 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.771926 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.772323 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.772807 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773668 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773747 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.774351 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.774429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.774721 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.772975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775154 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775550 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.773161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.775895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.777509 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.777793 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.778072 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.779630 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.779773 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.780041 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.780381 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.780804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781272 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781427 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.781482 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.783474 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.784496 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.785386 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.788171 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.788409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.789138 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.790990 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.792088 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.795860 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.796794 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.798859 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.799802 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.800505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.801401 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.801996 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.802836 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.804821 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.805635 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.806300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.806483 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.807844 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.808603 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.809598 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.810175 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.811103 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.812383 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.813180 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.813846 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.815036 4793 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.815178 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816669 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816777 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816934 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816955 4793 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816973 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.816988 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817006 4793 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817019 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817032 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817045 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817057 4793 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817083 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817101 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817112 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817122 4793 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817132 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817142 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817152 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817161 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817170 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817182 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817208 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817218 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817227 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817241 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817253 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817263 4793 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817271 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817281 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817289 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817308 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817316 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817324 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817332 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817341 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817351 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817360 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817368 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817549 4793 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817569 4793 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817585 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817714 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817726 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817735 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817746 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817755 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817786 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817796 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817805 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817843 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817858 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817871 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817883 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817894 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817905 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817916 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817928 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817940 4793 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817941 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817952 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817963 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817975 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817986 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.817997 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818008 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818019 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818031 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818043 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818054 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818065 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818076 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818086 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818097 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818109 4793 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818199 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818276 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818348 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818463 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818472 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818502 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818561 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818575 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818590 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818602 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818612 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818622 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818633 4793 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818643 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818654 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818663 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818674 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818684 4793 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818695 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818705 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818716 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818727 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818740 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818750 4793 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818760 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818771 4793 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818783 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818794 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818805 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818820 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818835 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818847 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818860 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818870 4793 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818881 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818893 4793 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818904 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818916 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 
26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818928 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818941 4793 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818951 4793 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818961 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818970 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818983 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.818991 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819000 4793 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819058 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819068 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819076 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819085 4793 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819094 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819102 
4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819071 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819114 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819218 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819230 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819240 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath 
\"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819251 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819260 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819269 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819277 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819286 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819295 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819304 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819315 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819345 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819355 4793 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819364 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819372 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819381 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819393 4793 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819401 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819410 4793 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819436 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819446 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819445 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819455 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819498 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819509 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.819906 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.821918 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.823094 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.823653 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.824758 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.825447 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.826337 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.826945 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.828452 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.829014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.829066 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.830107 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.830777 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.831686 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.832681 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.833559 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.834006 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.834475 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.835445 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.836031 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 22:40:05 crc 
kubenswrapper[4793]: I0126 22:40:05.836945 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.837461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.846343 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.856159 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.865095 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.874890 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.890733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.905645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.916636 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.918637 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: E0126 22:40:05.919338 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.919827 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.932881 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:05 crc kubenswrapper[4793]: I0126 22:40:05.998172 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 22:40:06 crc kubenswrapper[4793]: W0126 22:40:06.011328 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500 WatchSource:0}: Error finding container a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500: Status 404 returned error can't find the container with id a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500 Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.012127 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 22:40:06 crc kubenswrapper[4793]: W0126 22:40:06.021619 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7 WatchSource:0}: Error finding container 5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7: Status 404 returned error can't find the container with id 5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7 Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.047273 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324354 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324424 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324462 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.324501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324630 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324647 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324658 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.324718 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.324704213 +0000 UTC m=+22.313475725 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325104 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325129 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325121835 +0000 UTC m=+22.313893337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325179 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325173596 +0000 UTC m=+22.313945108 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325289 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325343 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325352 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325376 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325381 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-01-26 22:40:07.325373552 +0000 UTC m=+22.314145064 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:06 crc kubenswrapper[4793]: E0126 22:40:06.325498 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:07.325469824 +0000 UTC m=+22.314241376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.712782 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:34:55.944418654 +0000 UTC Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.914690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b"} Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.914750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c"} Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.914763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ff2b657cc916f0eaa1e3f0c018756b8d04955a2841d233df4fc4b2917266b4e3"} Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.916350 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5b094a43182ef9056e7f82093e9b10edb325683af2fbfcdc16d10059506a1ea7"} Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.917828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5"} Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.917863 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a51ab67f3d21e65993130341e224edf516e6ea92be155105bb35d9d4b2b9c500"} Jan 26 22:40:06 crc 
kubenswrapper[4793]: I0126 22:40:06.932718 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.943830 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.953642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.964104 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.978578 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:06 crc kubenswrapper[4793]: I0126 22:40:06.996689 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.011601 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.031765 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.050003 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.064965 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.080743 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.102012 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.117443 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.134861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.146437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.163062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333633 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.333576076 +0000 UTC m=+24.322347588 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.333765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333861 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333883 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333903 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333943 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.333946 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.333939046 +0000 UTC m=+24.322710558 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334083 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.334051469 +0000 UTC m=+24.322823031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334219 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334285 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.334264675 +0000 UTC m=+24.323036247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334386 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334409 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334422 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.334467 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:09.33445753 +0000 UTC m=+24.323229072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.713095 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:48:20.781609183 +0000 UTC Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.760652 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.760676 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.761158 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.761322 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.761501 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:07 crc kubenswrapper[4793]: E0126 22:40:07.762064 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.764565 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 22:40:07 crc kubenswrapper[4793]: I0126 22:40:07.765489 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 22:40:08 crc kubenswrapper[4793]: I0126 22:40:08.713695 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:29:52.7478337 +0000 UTC Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349339 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349515 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349587 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349552612 +0000 UTC m=+28.338324154 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.349634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349706 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349770 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349752248 +0000 UTC m=+28.338523790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349779 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349804 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349823 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349874 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349858401 +0000 UTC m=+28.338629943 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349930 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.349968 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.349955593 +0000 UTC m=+28.338727145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350043 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350062 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350076 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.350114 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:13.350102618 +0000 UTC m=+28.338874160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.714892 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:03:53.787173525 +0000 UTC Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.760866 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.760956 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.760989 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.761235 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.761399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:09 crc kubenswrapper[4793]: E0126 22:40:09.761692 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.929006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce"} Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.947763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b827
99488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:09Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.967363 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:09Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:09 crc kubenswrapper[4793]: I0126 22:40:09.986780 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:09Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.009169 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.029854 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.052416 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.073913 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.095738 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:10Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:10 crc kubenswrapper[4793]: I0126 22:40:10.715707 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:23:46.196366185 +0000 UTC Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.596856 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.599363 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.608750 4793 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.609347 4793 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610835 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.610853 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.634411 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.639633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.639837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.639969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.640098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.640271 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.657970 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.663627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.663843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.663995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.664132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.664289 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.679357 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.684372 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.714757 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.716660 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:29:05.742343004 +0000 UTC Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721310 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.721409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.744314 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:11Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.745026 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.747534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.747754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.747989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.748512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.748876 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.760783 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.760849 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.760902 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.761607 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.761492 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:11 crc kubenswrapper[4793]: E0126 22:40:11.761823 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.852562 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:11 crc kubenswrapper[4793]: I0126 22:40:11.955594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:11Z","lastTransitionTime":"2026-01-26T22:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.059730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.066912 4793 csr.go:261] certificate signing request csr-x4qqb is approved, waiting to be issued Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.084022 4793 csr.go:257] certificate signing request csr-x4qqb is issued Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.123470 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jwwcw"] Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.123837 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.126063 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.126561 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.127559 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.157039 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.162143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.203540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68b05073-b99f-4026-986a-0b8a0f7be18a-hosts-file\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.203591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxc8\" (UniqueName: \"kubernetes.io/projected/68b05073-b99f-4026-986a-0b8a0f7be18a-kube-api-access-qrxc8\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.238809 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 
22:40:12.264104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.264129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.285505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.304588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68b05073-b99f-4026-986a-0b8a0f7be18a-hosts-file\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.304625 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qrxc8\" (UniqueName: \"kubernetes.io/projected/68b05073-b99f-4026-986a-0b8a0f7be18a-kube-api-access-qrxc8\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.304890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68b05073-b99f-4026-986a-0b8a0f7be18a-hosts-file\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.316676 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.328216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxc8\" (UniqueName: \"kubernetes.io/projected/68b05073-b99f-4026-986a-0b8a0f7be18a-kube-api-access-qrxc8\") pod \"node-resolver-jwwcw\" (UID: \"68b05073-b99f-4026-986a-0b8a0f7be18a\") " pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.353731 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.369687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.369962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.370047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.370124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.370205 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.386016 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b292
76e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.407488 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.420050 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.437009 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.445773 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jwwcw" Jan 26 22:40:12 crc kubenswrapper[4793]: W0126 22:40:12.461397 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b05073_b99f_4026_986a_0b8a0f7be18a.slice/crio-77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b WatchSource:0}: Error finding container 77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b: Status 404 returned error can't find the container with id 77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.472940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.472998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.473009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.473024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.473054 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.575647 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.678495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.717813 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:15:39.28029654 +0000 UTC Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.781279 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.884494 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.939654 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jwwcw" event={"ID":"68b05073-b99f-4026-986a-0b8a0f7be18a","Type":"ContainerStarted","Data":"05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.939697 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jwwcw" event={"ID":"68b05073-b99f-4026-986a-0b8a0f7be18a","Type":"ContainerStarted","Data":"77b96212d18d7c09b13289563d020820b7c0f525cef8a89f60de3415541caf4b"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.967178 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.987696 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:12Z","lastTransitionTime":"2026-01-26T22:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:12 crc kubenswrapper[4793]: I0126 22:40:12.992125 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:12Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.007909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.020284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.039301 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.062515 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5htjl"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.062930 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.064398 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-l5qgq"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.064719 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.065462 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.068275 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.068919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ptkmd"] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.069110 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.070007 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.070209 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.071390 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.071451 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.073084 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.073545 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.074400 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.074609 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.076871 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077203 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077200 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 
22:40:13.077124 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.077814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078017 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078165 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078346 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078467 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078671 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.078865 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.083242 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.085458 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 22:35:12 +0000 UTC, rotation deadline is 2026-12-16 11:50:49.74805386 +0000 UTC Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.085483 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7765h10m36.662573167s for next certificate rotation Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.089893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.096369 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.112925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.127260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.140415 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.151891 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.162296 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.173984 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.186496 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.192869 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.198606 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.209220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-conf-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217661 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cnibin\") pod 
\"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217814 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-kubelet\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217857 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-hostroot\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-system-cni-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.217988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzr2\" (UniqueName: \"kubernetes.io/projected/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-kube-api-access-qxzr2\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218035 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22a78b43-c8a5-48e0-8fe3-89bc7b449391-mcd-auth-proxy-config\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cnibin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-multus-certs\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-etc-kubernetes\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cni-binary-copy\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218223 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-binary-copy\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218238 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9ws\" (UniqueName: \"kubernetes.io/projected/22a78b43-c8a5-48e0-8fe3-89bc7b449391-kube-api-access-ml9ws\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218259 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-system-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218278 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218298 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 
22:40:13.218318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-multus\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-kube-api-access-4df2w\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218371 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22a78b43-c8a5-48e0-8fe3-89bc7b449391-rootfs\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218389 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-bin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218406 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218445 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-os-release\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-daemon-config\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218485 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-socket-dir-parent\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-k8s-cni-cncf-io\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218572 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218587 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218622 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22a78b43-c8a5-48e0-8fe3-89bc7b449391-proxy-tls\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218689 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-os-release\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218719 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-netns\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218748 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218764 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.218983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.243823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.263946 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.280418 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.295621 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.297777 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.316062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-etc-kubernetes\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319556 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-multus-certs\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cni-binary-copy\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319635 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-binary-copy\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319676 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc 
kubenswrapper[4793]: I0126 22:40:13.319696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9ws\" (UniqueName: \"kubernetes.io/projected/22a78b43-c8a5-48e0-8fe3-89bc7b449391-kube-api-access-ml9ws\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319694 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-etc-kubernetes\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319776 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319813 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-system-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319826 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-multus-certs\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319717 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-system-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-multus\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-kube-api-access-4df2w\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22a78b43-c8a5-48e0-8fe3-89bc7b449391-rootfs\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.319982 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-bin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-os-release\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-daemon-config\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320088 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320107 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320130 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320153 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-socket-dir-parent\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-k8s-cni-cncf-io\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320208 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-cni-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22a78b43-c8a5-48e0-8fe3-89bc7b449391-proxy-tls\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"ovnkube-node-pwtbk\" 
(UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320368 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-os-release\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320413 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-netns\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320469 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-conf-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320490 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc 
kubenswrapper[4793]: I0126 22:40:13.320530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cnibin\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320551 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-kubelet\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-hostroot\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-system-cni-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzr2\" (UniqueName: \"kubernetes.io/projected/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-kube-api-access-qxzr2\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22a78b43-c8a5-48e0-8fe3-89bc7b449391-mcd-auth-proxy-config\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cnibin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320754 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cnibin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.320770 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-binary-copy\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-os-release\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-multus\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321701 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-cni-binary-copy\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/22a78b43-c8a5-48e0-8fe3-89bc7b449391-rootfs\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321822 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321826 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-socket-dir-parent\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-cni-bin\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321904 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.321947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-cni-dir\") pod \"multus-l5qgq\" (UID: 
\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-k8s-cni-cncf-io\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322432 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322594 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-run-netns\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-conf-dir\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-cnibin\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322640 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-host-var-lib-kubelet\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322691 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-multus-daemon-config\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322755 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-os-release\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.322902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323053 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-hostroot\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-system-cni-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.323630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22a78b43-c8a5-48e0-8fe3-89bc7b449391-mcd-auth-proxy-config\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.324724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22a78b43-c8a5-48e0-8fe3-89bc7b449391-proxy-tls\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.326305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"ovnkube-node-pwtbk\" (UID: 
\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.337425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9ws\" (UniqueName: \"kubernetes.io/projected/22a78b43-c8a5-48e0-8fe3-89bc7b449391-kube-api-access-ml9ws\") pod \"machine-config-daemon-5htjl\" (UID: \"22a78b43-c8a5-48e0-8fe3-89bc7b449391\") " pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.337569 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.342797 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/2e6daa0d-7641-46e1-b9ab-8479c1cd00d6-kube-api-access-4df2w\") pod \"multus-l5qgq\" (UID: \"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\") " pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.345331 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"ovnkube-node-pwtbk\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.359816 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzr2\" (UniqueName: \"kubernetes.io/projected/b1a14c1f-430a-4e5b-bd6a-01a959edbab1-kube-api-access-qxzr2\") pod \"multus-additional-cni-plugins-ptkmd\" (UID: \"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\") " pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.380087 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.386742 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l5qgq" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.396502 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.397893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.400690 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" Jan 26 22:40:13 crc kubenswrapper[4793]: W0126 22:40:13.404830 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e6daa0d_7641_46e1_b9ab_8479c1cd00d6.slice/crio-fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7 WatchSource:0}: Error finding container fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7: Status 404 returned error can't find the container with id fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7 Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.421934 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422103 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422078581 +0000 UTC m=+36.410850093 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422280 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422297 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422361 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422387 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422416 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.42240154 +0000 UTC m=+36.411173072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422426 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422566 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422552234 +0000 UTC m=+36.411323746 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.422313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422845 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422896 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422907 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422920 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422940 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422927105 +0000 UTC m=+36.411698627 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.422958 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:21.422949996 +0000 UTC m=+36.411721518 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.506305 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.609794 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.712763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.718144 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:41:37.505424542 +0000 UTC Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.760419 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.760501 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.760915 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.761033 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.761155 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:13 crc kubenswrapper[4793]: E0126 22:40:13.765649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.815730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.919761 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:13Z","lastTransitionTime":"2026-01-26T22:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.951106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.951229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"fac125aa81c41d4d153ea4abcb2a0c8c5ff6f23b449fe33794c9848a457352d7"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.953284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.953364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.953386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"2eef6063c20fd0b6e4d20236ef5e071a6fe8bc105d178308dc384b0936022a50"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.955165 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" exitCode=0 Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.955220 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.955257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"972d2d9dd7c3ad94fb7b559afde32870a71392b37532ff0117112547580522c5"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.960838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.960908 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"3edf8f863f7f4b62e357bcbfbaf64c553b35bbb4d8d0ca31b2b304dba3b19de6"} Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.976077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:13 crc kubenswrapper[4793]: I0126 22:40:13.992668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:13Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.009683 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.022916 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.027609 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.041804 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.059418 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.080747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: 
I0126 22:40:14.095716 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.111846 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.125168 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126576 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.126646 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.145575 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.160258 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.189320 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.210847 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.229438 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.231396 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.249901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.260216 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.276604 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.289781 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.302293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.315376 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.332697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.342747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.400919 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.417529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.430474 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.440969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.448119 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.543835 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.646349 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
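Every "Failed to update status for pod" record above fails for the same reason: the serving certificate of the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26. The handshake error is the standard x509 validity-window check. A minimal Go sketch of that check follows; the /etc/webhook-cert/ directory is taken from the webhook container's volumeMounts logged earlier, but the tls.crt file name is an assumption:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed location: the webhook container mounts its serving cert at
	// /etc/webhook-cert/ (per its volumeMounts above); tls.crt is a guess.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The same window test a TLS handshake applies; with the logged values
	// now=2026-01-26T22:40:14Z and NotAfter=2025-08-24T17:21:41Z it fails.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until that certificate is rotated, every Post to https://127.0.0.1:9743/pod fails the same way, so none of the kubelet's status patches can land.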
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.719019 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:46:12.606221069 +0000 UTC
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.748994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.749014 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.852246 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
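The NodeNotReady condition that repeats above is the kubelet's runtime-network readiness gate: ovnkube-node is still in PodInitializing, so nothing has written a CNI network config into /etc/kubernetes/cni/net.d/ yet. A minimal sketch of that gate, assuming plain Go and a simplified match on file extensions (the real check goes through the CRI runtime status and parses the config with the CNI library, so this only mirrors the existence test the error message describes):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one candidate CNI
// network configuration file. Existence is only the first gate; the real
// readiness check also parses and validates the file contents.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Mirrors the logged condition: NetworkReady=false until the
		// network provider drops its config file.
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
		return
	}
	fmt.Println("NetworkReady=true")
}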
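The certificate_manager line above also shows why the kubelet reports a rotation deadline in the past (2025-12-13) for a certificate that expires 2026-02-24: the deadline is chosen as a jittered point at roughly 70-90% of the certificate's validity window, and as of 2026-01-26 that point has already gone by, so rotation is due immediately. A sketch of that computation; the 0.7/0.2 constants are believed to match k8s.io/client-go's certificate manager but should be treated as an assumption here, as is the one-year NotBefore:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline approximates client-go's certificate manager:
// rotate at a random point between 70% and 90% of the validity window.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	jittered := time.Duration(total * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter is taken from the log; a one-year validity is assumed.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	// With these inputs the deadline lands between roughly 2025-11-06 and
	// 2026-01-18, bracketing the logged 2025-12-13 deadline.
}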
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.957654 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:14Z","lastTransitionTime":"2026-01-26T22:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970780 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.970800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.972876 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b" exitCode=0
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.972996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b"}
Jan 26 22:40:14 crc kubenswrapper[4793]: I0126 22:40:14.993243 4793 status_manager.go:875] "Failed to update status for pod"
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:14Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.000802 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cjhd7"] Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.001207 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.003123 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.003536 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.003636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.004354 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.016689 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.032145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.047648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.062618 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.066183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.087993 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.103646 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.117909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.142223 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.150497 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24361290-ee1e-4424-b0f0-27d0b8f013ea-host\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.150628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgrbl\" (UniqueName: \"kubernetes.io/projected/24361290-ee1e-4424-b0f0-27d0b8f013ea-kube-api-access-fgrbl\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.150711 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24361290-ee1e-4424-b0f0-27d0b8f013ea-serviceca\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.162207 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.166690 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.181084 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.195013 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.221863 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.236256 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24361290-ee1e-4424-b0f0-27d0b8f013ea-serviceca\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252259 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24361290-ee1e-4424-b0f0-27d0b8f013ea-host\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgrbl\" (UniqueName: \"kubernetes.io/projected/24361290-ee1e-4424-b0f0-27d0b8f013ea-kube-api-access-fgrbl\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252319 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.252522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24361290-ee1e-4424-b0f0-27d0b8f013ea-host\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.253650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/24361290-ee1e-4424-b0f0-27d0b8f013ea-serviceca\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.266432 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.270577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.278815 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgrbl\" (UniqueName: \"kubernetes.io/projected/24361290-ee1e-4424-b0f0-27d0b8f013ea-kube-api-access-fgrbl\") pod \"node-ca-cjhd7\" (UID: \"24361290-ee1e-4424-b0f0-27d0b8f013ea\") " pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.281035 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.302795 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z 
is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.328427 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cjhd7" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.331942 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},
{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.359540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.386768 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.389964 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.409144 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.430356 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.442183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.454157 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.466581 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.482288 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.489828 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.517995 4793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.520161 4793 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.520197 4793 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.520700 4793 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: W0126 22:40:15.521626 4793 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 
22:40:15.592764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.592787 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.695395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.696300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.719358 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:41:06.657666969 +0000 UTC Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.760260 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:15 crc kubenswrapper[4793]: E0126 22:40:15.760416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.760510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:15 crc kubenswrapper[4793]: E0126 22:40:15.760776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.760510 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:15 crc kubenswrapper[4793]: E0126 22:40:15.760985 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.778247 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.793819 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799515 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.799564 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.808350 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.823498 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.853233 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.884782 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc 
kubenswrapper[4793]: I0126 22:40:15.901904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.903329 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:15Z","lastTransitionTime":"2026-01-26T22:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.921244 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.938489 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.956831 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.975975 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.981622 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.985226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cjhd7" event={"ID":"24361290-ee1e-4424-b0f0-27d0b8f013ea","Type":"ContainerStarted","Data":"5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.985298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cjhd7" event={"ID":"24361290-ee1e-4424-b0f0-27d0b8f013ea","Type":"ContainerStarted","Data":"81a77aa776488768f9f9d8e2ee1e3b47966a0c6dffef06988f167ce3533f71b1"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.989469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:40:15 crc kubenswrapper[4793]: I0126 22:40:15.998825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:15Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006276 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.006288 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.018464 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.036860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.051851 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.072730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.093355 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110632 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.110974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.111102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.114577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.128469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.148640 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.169741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.186753 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.201058 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.214550 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.222285 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.238925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.258322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.270702 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.283483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:16Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.316949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.316994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.317008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.317026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.317038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.335158 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.419879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.523357 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.530258 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.627168 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.720313 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:09:45.993049834 +0000 UTC Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.731873 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.794499 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.836595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:16 crc kubenswrapper[4793]: I0126 22:40:16.940339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:16Z","lastTransitionTime":"2026-01-26T22:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.016403 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.039134 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044068 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.044131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.059049 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.075484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.087048 4793 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.099616 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69
d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.125355 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.147729 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.150672 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.171126 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.196762 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.219127 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.237094 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.251479 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.253685 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.270317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.291268 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:17Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.355141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.458619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.459333 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.564814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.668302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.721411 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:53:13.309234562 +0000 UTC Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.760893 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.760987 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.761137 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:17 crc kubenswrapper[4793]: E0126 22:40:17.761385 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:17 crc kubenswrapper[4793]: E0126 22:40:17.761557 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:17 crc kubenswrapper[4793]: E0126 22:40:17.761688 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.770921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.771022 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.874789 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:17 crc kubenswrapper[4793]: I0126 22:40:17.977973 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:17Z","lastTransitionTime":"2026-01-26T22:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.002257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.005039 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847" exitCode=0 Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.005102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.036147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.056340 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.075181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.080376 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.097950 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.114805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.138229 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.154382 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.169677 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183502 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183743 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.183871 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.220087 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z 
is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.239960 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.257425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.271633 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.283006 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:18Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.286900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.286955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.286977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.287009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.287062 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.392522 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.495472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.599253 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
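
The burst of identical NodeNotReady conditions above is the kubelet's node-status loop re-reporting the same fact on every sync: the container runtime returns NetworkReady=false until a CNI network configuration appears on disk. Below is a minimal sketch of that readiness check, assuming nothing beyond the directory named in the message and the Go standard library; hasCNIConfig is a hypothetical helper, not kubelet's actual code.

```go
// A sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": report ready only once at least one
// *.conf, *.conflist, or *.json file exists in the CNI conf directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig is a hypothetical helper mirroring the idea, not kubelet code.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err // a missing directory also means "not ready"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println("CNI config present:", ok, "err:", err)
}
```

The status entries elsewhere in this log suggest why the file never appears during this window: the ovnkube-node pod that would write it is itself still stuck in PodInitializing.
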
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.701998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.702012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.722594 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:04:17.632366313 +0000 UTC
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.805308 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
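
The certificate_manager entry above is worth pausing on: the kubelet-serving certificate does not expire until 2026-02-24, yet the reported rotation deadline (2025-12-13) is already in the past relative to the node clock (2026-01-26), so rotation was overdue before this boot even started. That is consistent with the expired-certificate webhook failures throughout this log. The sketch below shows how such a deadline is typically derived, assuming the client-go convention of rotating at a random point 70-90% of the way through the certificate's validity; the fractions and the one-year lifetime are assumptions, not values from this log.

```go
// Sketch: pick a jittered rotation deadline inside a certificate's
// validity window, in the style of client-go's certificate manager.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Rotate somewhere in the 70-90% band of the lifetime (assumed fractions).
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed 1-year lifetime
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotate at:", deadline, "overdue now:", time.Now().After(deadline))
}
```
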
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:18 crc kubenswrapper[4793]: I0126 22:40:18.908747 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:18Z","lastTransitionTime":"2026-01-26T22:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.011899 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.012557 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4" exitCode=0 Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.012599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.033420 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.048243 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.062134 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.080648 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.106232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115253 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.115275 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.128859 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.148129 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.166554 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.184181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 
22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.208163 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.220447 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.222353 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.235454 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.248985 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.268064 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.324851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.429448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.429591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.430021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.430064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.430086 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.536981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.537005 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.643915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.643972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.643986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.644015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.644030 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.723079 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:03:13.09412886 +0000 UTC Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.753600 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.760493 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.760551 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:19 crc kubenswrapper[4793]: E0126 22:40:19.760656 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:19 crc kubenswrapper[4793]: E0126 22:40:19.760776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.761005 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:19 crc kubenswrapper[4793]: E0126 22:40:19.761333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.794754 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.818283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.844155 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.857883 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.865638 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.887226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.903949 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.924881 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.944893 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab
95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.960723 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:19Z","lastTransitionTime":"2026-01-26T22:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.962026 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:19 crc kubenswrapper[4793]: I0126 22:40:19.983827 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:19Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.008848 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.022504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.023032 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.023083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.026635 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644" exitCode=0 Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.026699 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.031329 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.049315 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.062968 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.064972 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.066147 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.072577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.090551 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.121933 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0
a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.141694 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.157461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.167999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.168016 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.176148 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.193548 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.210668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.227917 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da9
4741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.244282 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.262962 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271243 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.271294 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.278220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.289909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.307645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.331152 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.360833 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:20Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.374989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.375002 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.478597 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.583060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.583658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.583862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.584075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.584368 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.687947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.688093 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.723532 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:06:59.528518634 +0000 UTC Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.790709 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.893752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.995961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:20 crc kubenswrapper[4793]: I0126 22:40:20.996457 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:20Z","lastTransitionTime":"2026-01-26T22:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.032422 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706" exitCode=0 Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.032773 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.032975 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.055292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.068786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.085882 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.099358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.103090 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.117790 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.132813 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.147409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.160005 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.172523 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.185975 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.201060 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.203589 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.212897 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.223945 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.242854 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:21Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc 
kubenswrapper[4793]: I0126 22:40:21.306183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.306221 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.408537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444789 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444907 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.444941 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.444922684 +0000 UTC m=+52.433694196 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444965 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.444995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.445017 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445064 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445088 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445099 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445130 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445141 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445149 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445157 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.4451421 +0000 UTC m=+52.433913612 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445175 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.445167531 +0000 UTC m=+52.433939043 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445108 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445299 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.445279544 +0000 UTC m=+52.434051086 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445092 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.445360 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:40:37.445347426 +0000 UTC m=+52.434118978 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511866 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.511997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.512022 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.615906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.719768 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.723781 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:50:33.911967625 +0000 UTC Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.760804 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.760894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.760967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.761006 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.761231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:21 crc kubenswrapper[4793]: E0126 22:40:21.761346 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.822894 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:21 crc kubenswrapper[4793]: I0126 22:40:21.925721 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:21Z","lastTransitionTime":"2026-01-26T22:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.029342 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.031529 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.041463 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1a14c1f-430a-4e5b-bd6a-01a959edbab1" containerID="d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0" exitCode=0 Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.041541 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerDied","Data":"d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0"} Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.060019 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.065868 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.068752 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.083914 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"6
01eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.085570 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
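Annotation: the repeated "Failed to update status for pod" and "Error updating node status, will retry" entries above share a single root cause. Every node and pod status patch is routed through the network-node-identity admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock time of 2026-01-26T22:40:22Z, so each Post fails TLS verification with "x509: certificate has expired or is not yet valid". Separately, the kubelet keeps reporting NetworkReady=false because no CNI configuration file has been written to /etc/kubernetes/cni/net.d/ yet, which is why the node flips to NotReady.

Below is a minimal diagnostic sketch in Go, not part of the log, that fetches the webhook's serving certificate and prints its validity window; it assumes the webhook is still listening on 127.0.0.1:9743 as the log shows, and deliberately skips verification so the expired certificate can be inspected rather than rejected.

// Hypothetical diagnostic sketch: dial the webhook endpoint seen in the log
// and print the serving certificate's validity window, to confirm the
// "certificate has expired" failure reported by the kubelet.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify is intentional here: the goal is to inspect the
	// expired certificate, not to validate the connection.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificates presented")
		return
	}
	cert := certs[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the kubelet error")
	}
}

Run on the node, this would be expected to print a notAfter of 2025-08-24T17:21:41Z, matching the error text in the entries above; on CRC-style clusters the usual path forward is typically to let the cluster's own certificate rotation run with a correct system clock rather than to replace the webhook certificate by hand.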
Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.088993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.102772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/
multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.113436 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.120383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.137054 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0
a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.143172 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: 
I0126 22:40:22.149245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.149302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.181619 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.187080 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: E0126 22:40:22.187280 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202555 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.202939 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.226345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.245813 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.261983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.296106 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da9
4741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.305791 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.312065 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.324569 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.337450 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.347881 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:22Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.408335 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.510506 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.613851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.717225 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.724680 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:49:45.213249184 +0000 UTC Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.820296 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:22 crc kubenswrapper[4793]: I0126 22:40:22.923616 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:22Z","lastTransitionTime":"2026-01-26T22:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.026509 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.046638 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/0.log" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.050919 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c" exitCode=1 Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.051047 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.052666 4793 scope.go:117] "RemoveContainer" containerID="6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.057923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" event={"ID":"b1a14c1f-430a-4e5b-bd6a-01a959edbab1","Type":"ContainerStarted","Data":"a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.073662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.089991 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.108214 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.123911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.129611 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.138672 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.153747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.171317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.195106 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 22:40:22.901414 6050 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:40:22.901450 6050 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:22.901483 6050 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:40:22.901503 6050 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:40:22.901510 6050 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:40:22.901531 6050 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:40:22.901559 6050 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 22:40:22.901565 6050 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:22.901588 6050 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:40:22.901606 6050 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:40:22.901653 6050 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:40:22.901691 6050 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 22:40:22.901714 6050 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:40:22.901730 6050 factory.go:656] Stopping watch factory\\\\nI0126 22:40:22.901747 6050 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
22:40:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.211838 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.227653 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.233661 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.243283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.261561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.275302 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.288693 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.304701 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.318447 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.331041 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.336130 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.341989 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.361333 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 22:40:22.901414 6050 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:40:22.901450 6050 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:22.901483 6050 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:40:22.901503 6050 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:40:22.901510 6050 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:40:22.901531 6050 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:40:22.901559 6050 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 22:40:22.901565 6050 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:22.901588 6050 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:40:22.901606 6050 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:40:22.901653 6050 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:40:22.901691 6050 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 22:40:22.901714 6050 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:40:22.901730 6050 factory.go:656] Stopping watch factory\\\\nI0126 22:40:22.901747 6050 ovnkube.go:599] Stopped 
ovnkube\\\\nI0126 22:40:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.375802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:4
0:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cn
i\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.390533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.405614 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.421545 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.437758 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.439605 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.458291 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.477895 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.494050 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.515491 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.542700 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.645777 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.725439 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:47:45.564996665 +0000 UTC Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.748740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.762800 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:23 crc kubenswrapper[4793]: E0126 22:40:23.762941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.763344 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:23 crc kubenswrapper[4793]: E0126 22:40:23.763428 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.763498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:23 crc kubenswrapper[4793]: E0126 22:40:23.763590 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.851765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:23 crc kubenswrapper[4793]: I0126 22:40:23.954915 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:23Z","lastTransitionTime":"2026-01-26T22:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.058762 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.063546 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.064503 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/0.log" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.069097 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" exitCode=1 Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.069154 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.069247 4793 scope.go:117] "RemoveContainer" containerID="6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.070535 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:24 crc kubenswrapper[4793]: E0126 22:40:24.070836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.091821 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.104833 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.123884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.137291 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.155560 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.161314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.190651 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da69a68e191910fa526697d2561b385a8cc87e0a79b99f66ec85845dcf9f64c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 22:40:22.901414 6050 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:40:22.901450 6050 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:22.901483 6050 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:40:22.901503 6050 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:40:22.901510 6050 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:40:22.901531 6050 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:40:22.901559 6050 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 22:40:22.901565 6050 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:22.901588 6050 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:40:22.901606 6050 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:40:22.901653 6050 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:40:22.901691 6050 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 22:40:22.901714 6050 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:40:22.901730 6050 factory.go:656] Stopping watch factory\\\\nI0126 22:40:22.901747 6050 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
22:40:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\
"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.208656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.228324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.245749 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.258701 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.263990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.264083 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.272390 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.287489 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.299260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.311331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:24Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.366931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367044 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.367091 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.470649 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.573949 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.677492 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.727035 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:17:51.912249207 +0000 UTC Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.780899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.780968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.780987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.781016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.781037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.884579 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:24 crc kubenswrapper[4793]: I0126 22:40:24.990635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:24Z","lastTransitionTime":"2026-01-26T22:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.075104 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.079903 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.080218 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.093343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.102455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.123582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.141109 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.160079 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.191556 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.196739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.242931 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.265437 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.280846 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.294984 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299699 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.299737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.308299 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.319612 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.339136 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.362870 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.379542 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.404969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.508908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.508987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.509006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.509036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.509061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.613647 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.616267 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh"] Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.617055 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.621889 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.622076 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.645953 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.662051 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.683550 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.705057 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab
95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m59gn\" (UniqueName: \"kubernetes.io/projected/b2178ad7-9b5f-4304-810f-f35026a9a27c-kube-api-access-m59gn\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.714491 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.716693 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.722491 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.727506 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:04:25.068888937 +0000 UTC Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.739128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.762517 4793 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.762650 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.762690 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.762783 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.762839 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:25 crc kubenswrapper[4793]: E0126 22:40:25.763038 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.770314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.788749 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.805409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m59gn\" (UniqueName: \"kubernetes.io/projected/b2178ad7-9b5f-4304-810f-f35026a9a27c-kube-api-access-m59gn\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.815452 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.816152 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.816410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2178ad7-9b5f-4304-810f-f35026a9a27c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.824700 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.828929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2178ad7-9b5f-4304-810f-f35026a9a27c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.832685 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.840823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m59gn\" (UniqueName: \"kubernetes.io/projected/b2178ad7-9b5f-4304-810f-f35026a9a27c-kube-api-access-m59gn\") pod \"ovnkube-control-plane-749d76644c-wb7lh\" (UID: \"b2178ad7-9b5f-4304-810f-f35026a9a27c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.854295 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.877444 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.894656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.913681 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.928512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:25Z","lastTransitionTime":"2026-01-26T22:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.931978 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.942253 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.949215 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: W0126 22:40:25.969240 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2178ad7_9b5f_4304_810f_f35026a9a27c.slice/crio-d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04 WatchSource:0}: Error finding container d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04: Status 404 returned error can't find the container with id d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04 Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.973044 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:25 crc kubenswrapper[4793]: I0126 22:40:25.993724 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:25Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.009722 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.025031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.033822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.038043 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.051917 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.063436 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.076361 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.084097 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" event={"ID":"b2178ad7-9b5f-4304-810f-f35026a9a27c","Type":"ContainerStarted","Data":"d7570436685bca503dad1c2b342de37a210b52b607b75e777c45f350f8cf8a04"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.091726 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8
dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCo
de\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.104264 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.115744 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.125220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.137984 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.138455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.160941 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.240430 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.342687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.343382 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.447634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.447892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.448214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.448286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.448343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.551866 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.655425 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.728175 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:38:55.316107526 +0000 UTC Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.748661 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7rl9w"] Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.749436 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: E0126 22:40:26.749526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.758945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.759083 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.780962 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.799640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.816119 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.829841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.829959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8q5\" (UniqueName: \"kubernetes.io/projected/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-kube-api-access-9d8q5\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.841067 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.860695 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861708 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.861744 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.873790 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.890077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.908673 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.925204 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.931480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: E0126 22:40:26.931676 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:26 crc kubenswrapper[4793]: E0126 22:40:26.931752 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:27.431727959 +0000 UTC m=+42.420499471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.931673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8q5\" (UniqueName: \"kubernetes.io/projected/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-kube-api-access-9d8q5\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.942309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"imag
e\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.957784 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.960682 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8q5\" (UniqueName: \"kubernetes.io/projected/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-kube-api-access-9d8q5\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963920 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.963953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:26Z","lastTransitionTime":"2026-01-26T22:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:26 crc kubenswrapper[4793]: I0126 22:40:26.983713 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:26Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.002775 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.023484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.044845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.064153 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.066280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.090649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" event={"ID":"b2178ad7-9b5f-4304-810f-f35026a9a27c","Type":"ContainerStarted","Data":"ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.090717 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" event={"ID":"b2178ad7-9b5f-4304-810f-f35026a9a27c","Type":"ContainerStarted","Data":"263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.113852 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.126810 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.143171 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.165826 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.168757 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.186590 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.207539 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.263789 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.272747 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.272933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.273088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.273224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.273336 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.301856 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0a
d0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.325919 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.348769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.368250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.376809 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.387810 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.406484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 
22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.422356 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.438232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: 
\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.438578 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.438751 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:28.438708295 +0000 UTC m=+43.427480077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.444957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.462337 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:27Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.480291 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.583895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.584918 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.687880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.728365 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:34:16.887113603 +0000 UTC Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760320 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760365 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760425 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760619 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.760654 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760909 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:27 crc kubenswrapper[4793]: E0126 22:40:27.760958 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.791241 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.894155 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:27 crc kubenswrapper[4793]: I0126 22:40:27.997902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:27Z","lastTransitionTime":"2026-01-26T22:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.099713 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.203402 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.307990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.308013 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.411590 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.451060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:28 crc kubenswrapper[4793]: E0126 22:40:28.451356 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:28 crc kubenswrapper[4793]: E0126 22:40:28.451458 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:30.451430805 +0000 UTC m=+45.440202347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.514521 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.617561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.721495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.729421 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:58:30.535059824 +0000 UTC Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.825360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:28 crc kubenswrapper[4793]: I0126 22:40:28.929439 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:28Z","lastTransitionTime":"2026-01-26T22:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.032712 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.136442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.240425 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.342951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.445974 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.549982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.550001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.652988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.653131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.730380 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:20:45.584363756 +0000 UTC Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.756280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.760564 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.760724 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.760824 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.760923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.761435 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.761739 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.761503 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:29 crc kubenswrapper[4793]: E0126 22:40:29.762067 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.861460 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:29 crc kubenswrapper[4793]: I0126 22:40:29.965281 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:29Z","lastTransitionTime":"2026-01-26T22:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.067952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.068104 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.174637 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.277967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.278102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.382614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.480763 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:30 crc kubenswrapper[4793]: E0126 22:40:30.481026 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:30 crc kubenswrapper[4793]: E0126 22:40:30.481164 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:34.481129853 +0000 UTC m=+49.469901405 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.485998 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.589175 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693057 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.693240 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.730772 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:12:55.001686266 +0000 UTC Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.796291 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:30 crc kubenswrapper[4793]: I0126 22:40:30.899825 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:30Z","lastTransitionTime":"2026-01-26T22:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.003930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.004153 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.107921 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.210919 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.314954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.418989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.419011 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.522722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.626889 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.729685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.731110 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:23:28.873199057 +0000 UTC Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760145 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760224 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.760404 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.760507 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.760703 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.760915 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:31 crc kubenswrapper[4793]: E0126 22:40:31.762729 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.834360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.937975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:31 crc kubenswrapper[4793]: I0126 22:40:31.938121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:31Z","lastTransitionTime":"2026-01-26T22:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.041931 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.144715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.248693 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.352480 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.455532 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.546848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.568638 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.574997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.575153 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.598803 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.604543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.604946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
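The "Error updating node status, will retry" entry above pinpoints the failure: the kubelet's PATCH against the node object is intercepted by the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and the TLS handshake to that webhook fails because its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. Below is a minimal Python sketch of the same validity comparison, illustrative only and not OpenShift code; it assumes the third-party cryptography package (42.0 or newer for the _utc accessor), and the endpoint is taken from the log entry above.

import datetime
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint quoted in the error above

# get_server_certificate() does not verify the peer by default, so it can
# still fetch a certificate that has already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.datetime.now(datetime.timezone.utc)
if now > cert.not_valid_after_utc:
    # The same comparison the handshake fails on:
    # "current time ... is after <notAfter>"
    print(f"certificate has expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
          f"is after {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
else:
    print(f"certificate valid until {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
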
event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.605056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.605096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.605302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.627412 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
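Alongside the failed patches, the NotReady condition itself is mechanical: the container runtime reports NetworkReady=false as long as /etc/kubernetes/cni/net.d/ contains no network configuration, and the kubelet relays that message verbatim in every "Node became not ready" entry. A short standard-library sketch of the directory scan behind that message follows; it is a hypothetical helper, with the path taken from the log and an extension filter matching what libcni accepts.

from pathlib import Path

NET_D = Path("/etc/kubernetes/cni/net.d")  # path quoted in the kubelet message

# libcni only considers these extensions when looking for a network config;
# an empty result is what surfaces as the NetworkPluginNotReady message.
confs = sorted(p for p in NET_D.glob("*") if p.suffix in {".conf", ".conflist", ".json"})

if confs:
    print("CNI configuration present:", ", ".join(p.name for p in confs))
else:
    print(f"no CNI configuration file in {NET_D}/. Has your network provider started?")
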
event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.633676 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.654592 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.659969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.683148 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:32Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:32 crc kubenswrapper[4793]: E0126 22:40:32.683411 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.685920 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.731908 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:38:30.333163516 +0000 UTC Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.788962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.789124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:32 crc kubenswrapper[4793]: I0126 22:40:32.893643 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:32Z","lastTransitionTime":"2026-01-26T22:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.001937 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.105968 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.209796 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.314495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.418871 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.521817 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.624971 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.730936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.732844 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:23:59.533844777 +0000 UTC
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.760781 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.761020 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.761155 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.761400 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.761565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.761771 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.761955 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:33 crc kubenswrapper[4793]: E0126 22:40:33.762117 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
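
The certificate_manager line above is worth a note: the kubelet-serving expiration stays fixed at 2026-02-24 05:53:03 UTC, yet every pass logs a different rotation deadline (2025-12-28 at 22:40:32, 2026-01-16 here, other values further down). That is expected: client-go's certificate manager recomputes a randomized deadline inside the tail of the validity window each time it evaluates rotation. A Go sketch of the idea; the 70%-90% band and the assumed issue date are illustrative stand-ins, not the exact upstream constants:

// rotation-deadline.go: recompute a jittered rotation deadline in the
// spirit of the kubelet's certificate manager; each call lands at a
// different point in the tail of the validity window, which is why the
// repeated log lines disagree while the expiration does not.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	// assumed band: a random point between 70% and 90% of the lifetime
	return notBefore.Add(time.Duration(total * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter)) // differs per call
	}
}
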
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.834390 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:33 crc kubenswrapper[4793]: I0126 22:40:33.938304 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:33Z","lastTransitionTime":"2026-01-26T22:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.041488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.145166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.248564 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.352500 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.456603 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.528707 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:34 crc kubenswrapper[4793]: E0126 22:40:34.528999 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 22:40:34 crc kubenswrapper[4793]: E0126 22:40:34.529114 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:42.52908402 +0000 UTC m=+57.517855572 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered
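
The (durationBeforeRetry 8s) in the mount failure above is the volume manager's exponential backoff: each failed MountVolume attempt roughly doubles the wait before the next retry, up to a cap, which is why no retry is permitted until 22:40:42. A Go sketch of the pattern; the initial delay, factor, and cap are assumptions for illustration, not the exact kubelet constants:

// mount-backoff.go: the doubling-with-cap retry pattern behind
// "durationBeforeRetry" in the nestedpendingoperations error above.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond           // assumed initial backoff
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed; next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Under these assumed constants the wait reaches 8s on the fifth consecutive failure, consistent with the durationBeforeRetry printed here.
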
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.559912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.559980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.560000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.560033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.560058 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.663549 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.733634 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:20:32.53901089 +0000 UTC
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.766841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.766931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.766953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.767000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.767029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.870543 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:34 crc kubenswrapper[4793]: I0126 22:40:34.974478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:34Z","lastTransitionTime":"2026-01-26T22:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.078322 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.182574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.285448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.388385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.491995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.492116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.595265 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.698993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.699012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.734792 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:05:17.300560299 +0000 UTC
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.760541 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.760554 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.760715 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
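
Every NetworkPluginNotReady message in this stretch reduces to one check: the container runtime looks for a network configuration in /etc/kubernetes/cni/net.d/ and reports NotReady while that directory has no config file, so the sandbox-less pods above cannot start until one appears. A minimal Go sketch of such a readiness probe, assuming only the conventional .conf/.conflist/.json extensions for CNI configs:

// cni-ready.go: report whether a CNI network config exists, mirroring the
// "no CNI configuration file" condition repeated throughout this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("network not ready: %v\n", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("network ready: found", filepath.Join(dir, e.Name()))
			return
		}
	}
	fmt.Println("network not ready: no CNI configuration file in", dir)
}
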
Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.760919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.761103 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.761382 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.762753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:35 crc kubenswrapper[4793]: E0126 22:40:35.763156 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.784105 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.801815 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.802147 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.826533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.849448 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.874892 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.895983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908604 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.908653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:35Z","lastTransitionTime":"2026-01-26T22:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.910805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.927323 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.957880 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.972574 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:35 crc kubenswrapper[4793]: I0126 22:40:35.993784 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:35Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.013585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.013963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.014129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.034035 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.052682 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 
22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.074661 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.091776 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.120363 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.223408 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.305123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.322287 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.326954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.329923 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.349500 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.369307 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.405175 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431522 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.431490 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.452414 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.471000 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.491120 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.509033 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 
22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.531709 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535343 4793 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.535409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.554998 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.575247 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.592896 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.611478 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.634534 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.638763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.661064 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:36Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.735886 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:34:01.056733053 +0000 UTC Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742736 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.742801 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.846772 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:36 crc kubenswrapper[4793]: I0126 22:40:36.950730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:36Z","lastTransitionTime":"2026-01-26T22:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.053969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.054118 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.158308 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261554 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.261572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.365631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.365726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.365748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.366108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.366134 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462837 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.462890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463075 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463179 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.463149094 +0000 UTC m=+84.451920646 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463551 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.463532035 +0000 UTC m=+84.452303577 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463571 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463670 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463697 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463722 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463773 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463844 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.463874 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.464465 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.463729171 +0000 UTC m=+84.452500713 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.464533 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.464514702 +0000 UTC m=+84.453286244 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.464589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:41:09.464548623 +0000 UTC m=+84.453320165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.472479 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.582916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.582990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.583226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.583259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.583300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.687967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.736119 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:29:28.062805736 +0000 UTC Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.760680 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.760882 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.761170 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.761135 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.761356 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.761622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.761679 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:37 crc kubenswrapper[4793]: E0126 22:40:37.762423 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.762932 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.791621 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.895837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:37 crc kubenswrapper[4793]: I0126 22:40:37.896444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:37Z","lastTransitionTime":"2026-01-26T22:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.000805 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.104270 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.207985 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.311848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.415955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.519242 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.622470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.725955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.737034 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:09:44.863983883 +0000 UTC Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.829587 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.932925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.932998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.933024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.933077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:38 crc kubenswrapper[4793]: I0126 22:40:38.933105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:38Z","lastTransitionTime":"2026-01-26T22:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.036319 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.139715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.145607 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.149343 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.149827 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.165459 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.179161 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.202643 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.218536 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.230911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.243154 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.250425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.276263 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.296331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.314067 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.333292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346091 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.346224 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.357571 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97e
cb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.382825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.411855 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.433805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.449697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.455535 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.472236 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 
22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.492796 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:39Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554795 4793 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.554873 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.658402 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.737297 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:34:59.325383698 +0000 UTC Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760044 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760103 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760113 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.760164 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760342 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760511 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760641 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:39 crc kubenswrapper[4793]: E0126 22:40:39.760745 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.762321 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.865896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.865965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.865986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.866031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.866063 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:39 crc kubenswrapper[4793]: I0126 22:40:39.972774 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:39Z","lastTransitionTime":"2026-01-26T22:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.075982 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.159149 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.160860 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/1.log" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.167928 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" exitCode=1 Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.168017 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.168098 4793 scope.go:117] "RemoveContainer" containerID="5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.169577 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:40:40 crc kubenswrapper[4793]: E0126 22:40:40.170117 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.178709 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.193632 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.212820 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.231166 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.265601 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb4040012b7d9b4dee44921ce1b9839c0ab4b0ad0d8131aa8760c003ad278d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"message\\\":\\\"ost \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:23Z is after 2025-08-24T17:21:41Z]\\\\nI0126 22:40:23.922077 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922104 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0126 22:40:23.922122 6235 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0126 22:40:23.922135 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0126 22:40:23.922055 6235 services_controller.go:434] Service openshift-marketplace/redhat-operators retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{redhat-operators openshift-marketplace 8ef79441-cef6-4ba0-a073-a7b752dbbb3e 5667 0 2025-02-23 05:23:27 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators e1bbbbdb-a019-4415-8578-8f8fe53276e0 0xc0078d0b5d 0xc0078d0b5e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868
550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.282689 4793 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.291864 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.314106 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.342031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.368353 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387130 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.387266 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.388063 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.405705 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.430277 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.454278 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.471675 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.489392 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.490614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.505487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.525688 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.546837 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:40Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.593901 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.697973 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.738497 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:54:07.29838848 +0000 UTC Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.800695 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.903813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:40 crc kubenswrapper[4793]: I0126 22:40:40.904277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:40Z","lastTransitionTime":"2026-01-26T22:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.008258 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.111644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.177755 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.183808 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.184112 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.215582 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.218517 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.244210 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.264382 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.286561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.304389 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.320473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.324968 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.344906 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:
25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.363667 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.386775 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.406265 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.423734 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.428608 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.449703 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.471151 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.493335 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.512148 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.526979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 
22:40:41.527001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.532122 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.552889 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:41Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.629999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.630135 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.732951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.738880 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:13:30.091838166 +0000 UTC Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760615 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760644 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760694 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.760633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.760823 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.761014 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.761169 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:41 crc kubenswrapper[4793]: E0126 22:40:41.761295 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.836229 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
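The NetworkReady=false condition and the four pod_workers errors just above are the same fault seen from two angles: nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet (that normally happens once the SDN pods come up), so the kubelet cannot create sandboxes for pods that need the pod network, while host-network pods such as multus-l5qgq and node-resolver-jwwcw keep running. What the runtime checks is, in essence, whether that directory contains any loadable network config; a rough equivalent of the scan, mirroring the spirit of the check rather than CRI-O's exact code:

```go
// cniscan: list the CNI network configs a runtime would consider loading.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file; the node stays NotReady")
		return
	}
	fmt.Println("found CNI configs:", confs)
}
```

Until that scan finds a file, every node-status sync below repeats the identical KubeletNotReady condition.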
Has your network provider started?"} Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:41 crc kubenswrapper[4793]: I0126 22:40:41.939943 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:41Z","lastTransitionTime":"2026-01-26T22:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.043774 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147611 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.147678 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.250936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.251256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.354909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.354987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.355007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.355035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.355089 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.458588 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.562722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.627558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.627763 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.627877 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:40:58.627846742 +0000 UTC m=+73.616618294 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.666520 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.739748 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:08:23.351864576 +0000 UTC Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.769490 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
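The metrics-certs mount failure above is parked with durationBeforeRetry 16s: failed volume operations back off exponentially, and 16s is what a doubling schedule starting at 500ms reaches on the sixth consecutive failure. The initial value and cap in the sketch below are assumptions, not values read out of this kubelet. The underlying "object ... not registered" error is typical shortly after a kubelet restart, before the pod's secrets are re-registered with the kubelet's watch-based secret manager.

```go
// backoff: print an exponential retry schedule of the kind the kubelet
// applies to failed volume operations. Initial delay and cap are assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond            // assumed initial backoff
	maxDelay := 2*time.Minute + 2*time.Second  // assumed cap
	for failure := 1; failure <= 10; failure++ {
		fmt.Printf("failure %2d: wait %v before retrying\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

With those assumptions, failure 6 prints a 16s wait, which lines up with the retry scheduled for 22:40:58 in the entry above.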
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.872926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.872992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.873010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.873035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.873054 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.923608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.943657 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:42Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.949501 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.970363 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:42Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:42 crc kubenswrapper[4793]: I0126 22:40:42.975719 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:42Z","lastTransitionTime":"2026-01-26T22:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:42 crc kubenswrapper[4793]: E0126 22:40:42.998687 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:42Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.004880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.004963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.004983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.005010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.005031 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.028242 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:43Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.036472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.061811 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:43Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.062073 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
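The two error records above close out the kubelet's status-update loop: every PATCH of the node status is rejected because the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-26, so after the last retry the kubelet gives up with "update node status exceeds retry count". A minimal sketch for confirming the certificate window independently of the kubelet, assuming Python 3 with the third-party cryptography package is available on the affected node (host and port are taken verbatim from the error record):

    import datetime
    import socket
    import ssl

    from cryptography import x509  # third-party; assumed installed

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log

    # Verification must be off: an expired certificate would otherwise
    # abort the handshake before the certificate could be read at all.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    print("expired:  ", now > cert.not_valid_after)

Against this log the sketch should report notAfter 2025-08-24 17:21:41 and expired True; the same probe can be pointed at any HTTPS endpoint named in these webhook errors.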
event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.064595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.168743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.272500 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.375915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.375975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.375993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.376018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.376038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.478956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.479082 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.582164 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.685231 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.740151 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:30:53.02785541 +0000 UTC Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.760881 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.760942 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.761174 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.761275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761352 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:43 crc kubenswrapper[4793]: E0126 22:40:43.761709 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.788493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.892386 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.995968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:43 crc kubenswrapper[4793]: I0126 22:40:43.996061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:43Z","lastTransitionTime":"2026-01-26T22:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.099344 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.202551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.305343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.408518 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.511300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.613575 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.716735 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.741111 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:33:27.985332247 +0000 UTC
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.819975 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:44 crc kubenswrapper[4793]: I0126 22:40:44.922969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:44Z","lastTransitionTime":"2026-01-26T22:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.029658 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.132615 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.235103 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.337954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.441720 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.544932 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.648406 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.742234 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:00:22.574457352 +0000 UTC
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.751904 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.759913 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.759946 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.759948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.760003 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760265 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760390 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:40:45 crc kubenswrapper[4793]: E0126 22:40:45.760475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.781419 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.794269 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.804582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.817828 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.840921 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97e
cb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.855952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.856455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.856694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.856918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.857125 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.857320 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.874788 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.900836 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.916860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.934493 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.948833 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.961218 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:45Z","lastTransitionTime":"2026-01-26T22:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.963183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:45 crc kubenswrapper[4793]: I0126 22:40:45.982257 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.001154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:45Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.021497 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:46Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.039468 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:46Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.053592 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:46Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.064281 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.166951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.270139 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.374779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.375256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.479604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.582946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.582990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.583001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.583018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.583032 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.685918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.686324 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.743000 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:20:58.734382931 +0000 UTC Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.790549 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.895634 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:46 crc kubenswrapper[4793]: I0126 22:40:46.998659 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:46Z","lastTransitionTime":"2026-01-26T22:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.101710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.204448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.307847 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.410978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.411078 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.514862 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.619128 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.722887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.722954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.722973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.723004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.723024 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.743219 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:36:57.106525458 +0000 UTC Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.760961 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.761218 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.761320 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761371 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.760971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761538 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:47 crc kubenswrapper[4793]: E0126 22:40:47.761661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.825962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.826410 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:47 crc kubenswrapper[4793]: I0126 22:40:47.930317 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:47Z","lastTransitionTime":"2026-01-26T22:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.033692 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137891 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.137912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.241159 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.345325 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.448880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.552860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.552948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.552970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.553411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.553698 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.657848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.744250 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:11:23.706678216 +0000 UTC Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.761880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.865994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:48 crc kubenswrapper[4793]: I0126 22:40:48.968996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:48Z","lastTransitionTime":"2026-01-26T22:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.072180 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.175715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.279468 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.382306 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.485576 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.588740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.692604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.744452 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:48:16.335227437 +0000 UTC Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.760752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.760862 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.760948 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.761052 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.761299 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.761607 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.761901 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:49 crc kubenswrapper[4793]: E0126 22:40:49.761718 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.795114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:49 crc kubenswrapper[4793]: I0126 22:40:49.899823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:49Z","lastTransitionTime":"2026-01-26T22:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.004326 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.107955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.211174 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.314982 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.419366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.522985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.523152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.626781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.627736 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.731163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.745282 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:03:18.79862601 +0000 UTC Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.834431 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:50 crc kubenswrapper[4793]: I0126 22:40:50.938105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:50Z","lastTransitionTime":"2026-01-26T22:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.041987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.042154 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.145594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.249112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.351939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.352053 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.454638 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.557942 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.660710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.746079 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:50:29.519091645 +0000 UTC Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760394 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760418 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760492 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.760613 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.760778 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.760930 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.761031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:51 crc kubenswrapper[4793]: E0126 22:40:51.761136 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.762911 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.865876 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.969963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:51 crc kubenswrapper[4793]: I0126 22:40:51.970110 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:51Z","lastTransitionTime":"2026-01-26T22:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.073110 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.175727 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.278435 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.395758 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.499310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.603845 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.707466 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.746766 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:54:54.1767389 +0000 UTC Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.761649 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:40:52 crc kubenswrapper[4793]: E0126 22:40:52.762102 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.810457 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:52 crc kubenswrapper[4793]: I0126 22:40:52.912792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:52Z","lastTransitionTime":"2026-01-26T22:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.015950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.016087 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.120363 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.224156 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.327147 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.387875 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.402943 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.407723 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.420772 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
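Every status patch in this section is rejected for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-26. A minimal sketch for confirming that from the node, assuming Python 3 plus the third-party cryptography package (version 42+ for the *_utc accessors); chain verification is deliberately disabled because the point is to inspect the certificate, not to trust it:

    import datetime
    import socket
    import ssl

    from cryptography import x509  # third-party: pip install cryptography

    # Endpoint taken from the webhook error in the log above.
    host, port = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect only, do not verify

    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", cert.not_valid_after_utc)
    print("expired: ", cert.not_valid_after_utc < now)

Note the contrast with the certificate_manager record further up: the kubelet's own serving certificate is valid until 2026-02-24. It is the network-node-identity webhook certificate that has lapsed, so the kubelet cannot persist any node status until that certificate is rotated.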
event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.427430 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.467743 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
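The payload the kubelet keeps retrying is a strategic merge patch: the $setElementOrder/conditions directive pins the ordering of the conditions list, and the rest updates the conditions, allocatable/capacity, the image list, and nodeInfo. In the journal it is quoted twice (the patch is embedded in the err string, which is itself quoted), hence the \\\" escaping. A minimal sketch, assuming Python 3 and a single "Error updating node status" record fed on stdin, to recover the patch as plain JSON:

    import json
    import re
    import sys

    line = sys.stdin.read()
    # The patch sits between 'failed to patch status \"' and '\" for node'.
    m = re.search(r'failed to patch status \\"(\{.*\})\\" for node', line)
    if not m:
        sys.exit("no patch payload found")

    payload = m.group(1)
    # The payload was quoted twice (%q inside %q), so unescape twice.
    for _ in range(2):
        payload = payload.encode("ascii").decode("unicode_escape")

    print(json.dumps(json.loads(payload), indent=2))

The decoded document makes the failure easier to read: MemoryPressure, DiskPressure, and PIDPressure are all False (healthy) and only Ready is False, i.e. the node is otherwise fine, but nothing can run on it until the CNI configuration appears and this patch can actually be persisted.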
event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.474262 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.488954 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.494876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.494959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.494986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.495011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.495029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.513062 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:53Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.513394 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.514975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515052 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.515103 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.617979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.618047 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.720537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.747707 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:39:50.884406587 +0000 UTC Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760544 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760590 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760563 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.760749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.760816 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.761013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.761102 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:53 crc kubenswrapper[4793]: E0126 22:40:53.761278 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.823834 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:53 crc kubenswrapper[4793]: I0126 22:40:53.926822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:53Z","lastTransitionTime":"2026-01-26T22:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.030765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.135166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.238360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.341433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445696 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.445742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.549538 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.653375 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.747906 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:44:06.049432709 +0000 UTC Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756721 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.756741 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.860226 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:54 crc kubenswrapper[4793]: I0126 22:40:54.963728 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:54Z","lastTransitionTime":"2026-01-26T22:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.067358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.170850 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.273822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.376918 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.479768 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.582908 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.685812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.749122 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:13:15.191226872 +0000 UTC Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.760637 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.760726 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.760823 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.760962 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.761131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.761407 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.761646 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:55 crc kubenswrapper[4793]: E0126 22:40:55.761701 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.774645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.784438 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.788666 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.796328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.807720 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.819344 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.836137 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.848145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.859205 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.868723 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.885766 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891827 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.891928 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.907167 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97e
cb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.922290 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.935098 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.945389 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.959486 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.973292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 
22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.989505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:40:55Z is after 2025-08-24T17:21:41Z" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995381 4793 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:55 crc kubenswrapper[4793]: I0126 22:40:55.995441 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:55Z","lastTransitionTime":"2026-01-26T22:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.097893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.199914 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.303640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.406590 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.509495 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.612378 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.715903 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.749347 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:28:43.755005326 +0000 UTC Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.818609 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:56 crc kubenswrapper[4793]: I0126 22:40:56.921640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:56Z","lastTransitionTime":"2026-01-26T22:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.024143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.126816 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.230164 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.333920 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.436339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.538886 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642513 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.642541 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.745524 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.749661 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:42:02.901341193 +0000 UTC Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.761420 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.761561 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.761775 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.761858 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.762038 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.762127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.762486 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:57 crc kubenswrapper[4793]: E0126 22:40:57.762558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.848607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.950966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:57 crc kubenswrapper[4793]: I0126 22:40:57.951099 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:57Z","lastTransitionTime":"2026-01-26T22:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054048 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.054145 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.156879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.259477 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.361743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.463912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.566311 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.658541 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:58 crc kubenswrapper[4793]: E0126 22:40:58.658778 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:40:58 crc kubenswrapper[4793]: E0126 22:40:58.658879 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:41:30.658855111 +0000 UTC m=+105.647626623 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.668687 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.750281 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:35:44.40122025 +0000 UTC
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.771852 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
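The MountVolume failure above shows the kubelet's per-volume exponential backoff: the secret openshift-multus/metrics-daemon-secret is not yet registered, so setup fails and retries are blocked until 22:41:30, 32 seconds after the 22:40:58 failure; a 32s delay is what repeated doubling of a sub-second initial delay reaches after a handful of consecutive failures. A sketch of that schedule, with the initial delay, factor, and cap chosen to reproduce the 32s step (the exact constants are assumptions, not values read out of the kubelet source):

```go
// volbackoff.go - a sketch of the per-operation exponential backoff behind
// "No retries permitted until ... (durationBeforeRetry 32s)".
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max time.Duration
	factor       float64
	current      time.Duration
}

// next doubles the delay after every consecutive failure, up to the cap,
// and returns the instant before which no retry is permitted.
func (b *backoff) next(now time.Time) time.Time {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current = time.Duration(float64(b.current) * b.factor)
		if b.current > b.max {
			b.current = b.max
		}
	}
	return now.Add(b.current)
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute, factor: 2}
	now := time.Date(2026, 1, 26, 22, 39, 0, 0, time.UTC)
	for i := 0; i < 8; i++ {
		until := b.next(now)
		fmt.Printf("failure %d: retry after %-8v (until %s)\n",
			i+1, b.current, until.Format("15:04:05"))
		now = until // assume each retry fails as soon as it is allowed
	}
}
```

With these constants the delays run 0.5s, 1s, 2s, 4s, 8s, 16s, 32s, 64s, so the logged 32s step corresponds to roughly the seventh consecutive failure of this volume since the kubelet started.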
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.874626 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:58 crc kubenswrapper[4793]: I0126 22:40:58.977801 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:58Z","lastTransitionTime":"2026-01-26T22:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.080470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.183292 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.285672 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.388752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.490983 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.593661 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.697116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.751011 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:42:05.606659441 +0000 UTC Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.760777 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.760828 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.760969 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.760981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.761162 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.761388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.761563 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:40:59 crc kubenswrapper[4793]: E0126 22:40:59.761663 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.800828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.801334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.801818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.802412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.802741 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.906686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:40:59 crc kubenswrapper[4793]: I0126 22:40:59.907485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:40:59Z","lastTransitionTime":"2026-01-26T22:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.010657 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.114503 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.217610 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.320803 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.424442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.527267 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629844 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.629965 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.733328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.734858 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.751950 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:01:56.593402115 +0000 UTC Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.837522 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941050 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:00 crc kubenswrapper[4793]: I0126 22:41:00.941228 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:00Z","lastTransitionTime":"2026-01-26T22:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.044450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.044800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.044927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.045109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.045364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.147895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.148328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.148546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.148746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.149358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.252533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357052 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.357956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.461409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.563884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.564170 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.667714 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.753304 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:56:26.72991922 +0000 UTC Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.760830 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.760860 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.761011 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.761083 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.761137 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.762499 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.762338 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:01 crc kubenswrapper[4793]: E0126 22:41:01.762647 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.769686 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.778385 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.872980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.873107 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.975972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:01 crc kubenswrapper[4793]: I0126 22:41:01.976114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:01Z","lastTransitionTime":"2026-01-26T22:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.080530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.184683 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.287819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.391752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.495574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.600881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.600953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.600972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.601001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.601025 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.704995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.705150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.754010 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:06:52.398526007 +0000 UTC Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.808254 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:02 crc kubenswrapper[4793]: I0126 22:41:02.911956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:02Z","lastTransitionTime":"2026-01-26T22:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.014580 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.117699 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.221269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.261302 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/0.log" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.261385 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" containerID="a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08" exitCode=1 Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.261455 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerDied","Data":"a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.262321 4793 scope.go:117] "RemoveContainer" containerID="a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.287553 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.310343 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.324244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.332529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.349724 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.384054 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f5
5674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.409062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc 
kubenswrapper[4793]: I0126 22:41:03.426495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.426532 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.432416 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.451144 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.469174 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.491136 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.510705 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 
22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.529823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.530640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.551506 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.570383 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.592296 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.610077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.623099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.634819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.646824 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.737942 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.754212 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:03:47.342537022 +0000 UTC Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.760624 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.760692 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.760771 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.760792 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.760891 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.760994 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.761735 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.762268 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.840782 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.894282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.916084 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 
2025-08-24T17:21:41Z" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.921806 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.941115 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4aaaf2a3-8422-4886-9dc8-d9142aad48d5\\\",\\\"systemUUID\\\":\\\"601eed9e-4791-49d9-902a-c6f8f21a8d0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 
2025-08-24T17:21:41Z"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.947773 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.965151 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[node status patch payload identical to the preceding attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.972229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.972292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.972354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.972387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.972411 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:03 crc kubenswrapper[4793]: E0126 22:41:03.993764 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[node status patch payload identical to the preceding attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:03Z is after 2025-08-24T17:21:41Z"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:03 crc kubenswrapper[4793]: I0126 22:41:03.999607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:03Z","lastTransitionTime":"2026-01-26T22:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:04 crc kubenswrapper[4793]: E0126 22:41:04.022573 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[node status patch payload identical to the preceding attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z"
Jan 26 22:41:04 crc kubenswrapper[4793]: E0126 22:41:04.022808 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.025674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.129349 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
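Every rejected patch above dies on the same TLS step: the kubelet posts the node status to the network-node-identity webhook at https://127.0.0.1:9743 and refuses its serving certificate, which expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. A minimal stdlib-Python sketch for pulling that certificate's validity window straight off the endpoint, assuming it is run on the CRC node itself (the webhook only answers on loopback, per the Post URL in the errors):

import socket
import ssl

# Endpoint taken from the webhook error entries above.
HOST, PORT = "127.0.0.1", 9743

ctx = ssl.create_default_context()
ctx.check_hostname = False      # we want the cert even though it fails validation
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # DER bytes of the peer cert; the dict form is empty under CERT_NONE.
        der = tls.getpeercert(binary_form=True)

# Emit PEM so the validity window can be read with any x509 tool.
print(ssl.DER_cert_to_PEM_cert(der))

Piping the printed PEM through openssl x509 -noout -dates would be expected to report the same notAfter instant quoted in the x509 errors above; if the node clock were merely skewed rather than the certificate genuinely stale, the window would instead bracket the true current time.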
Has your network provider started?"}
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.232577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.267469 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/0.log"
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.267530 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"}
Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.283363 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.293955 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCo
de\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.306570 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.319467 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.331162 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.335126 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.341613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.353023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.362564 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.374483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.385710 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.394846 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.407822 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.424237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.437407 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.441270 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.456138 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.469990 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.492878 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.514508 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:04Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.541367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.644751 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.747143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.754418 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:32:44.245373249 +0000 UTC Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.762046 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.850702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:04 crc kubenswrapper[4793]: I0126 22:41:04.953823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:04Z","lastTransitionTime":"2026-01-26T22:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.056858 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.159602 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.262849 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.272735 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.275368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.275783 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.288911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642d
e1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.300409 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.309612 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.323819 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.340615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.355549 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.366512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.370454 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.386223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.408161 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f5
5674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.424052 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.437365 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.449373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCo
de\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.463243 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.469994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.470116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.477934 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.490801 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 
22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.502909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.515820 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.526635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.573364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.676743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.755528 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:35:39.101093127 +0000 UTC Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.760838 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.760944 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.761200 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.761261 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761331 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761453 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761230 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:05 crc kubenswrapper[4793]: E0126 22:41:05.761591 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.775113 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.783610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.783669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.783681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.785323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.785347 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.792631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.805438 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.825469 4793 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.845532 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.860341 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.872001 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.887300 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.900537 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.916092 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.935626 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.957684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.972777 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.984885 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.989943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.989985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.990002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.990022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.990036 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:05Z","lastTransitionTime":"2026-01-26T22:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:05 crc kubenswrapper[4793]: I0126 22:41:05.999814 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001e
dfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:05Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.011145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.027673 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.044365 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.093117 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.195907 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.281147 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.282171 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/2.log" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.285515 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" exitCode=1 Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.285571 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.285628 4793 scope.go:117] "RemoveContainer" containerID="f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.286513 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:06 crc kubenswrapper[4793]: E0126 22:41:06.286767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298597 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.298635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.304598 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.329080 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.344605 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.365408 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.391323 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3da1d77b2aa124b4de08934f6d18fb327f3d97ecb73be008f1fa4c59873ee4d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:40:39Z\\\",\\\"message\\\":\\\":(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 22:40:39.089571 6447 services_controller.go:356] Processing sync for service openshift-console/downloads for network=default\\\\nI0126 22:40:39.089565 6447 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-7rl9w before timer (time: 2026-01-26 22:40:40.05437048 +0000 UTC m=+1.651757425): skip\\\\nI0126 22:40:39.089584 6447 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 49.751µs)\\\\nI0126 22:40:39.089666 6447 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 22:40:39.089724 6447 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:40:39.089753 6447 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:40:39.089754 6447 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 22:40:39.094644 6447 factory.go:656] Stopping watch factory\\\\nI0126 22:40:39.094689 6447 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:40:39.094730 6447 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 22:40:39.094846 6447 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:05Z\\\",\\\"message\\\":\\\" 6842 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI0126 22:41:05.732808 6842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:41:05.732874 6842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:41:05.732940 6842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:41:05.733009 6842 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:41:05.733095 6842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:41:05.733208 6842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:41:05.733305 6842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:41:05.733430 6842 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:41:05.733567 6842 factory.go:656] Stopping watch factory\\\\nI0126 22:41:05.733642 6842 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:41:05.733026 6842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:41:05.733162 6842 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 22:41:05.733274 6842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:41:05.733521 6842 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:41:05.733937 6842 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.401391 4793 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.414473 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.434533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.450585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.470246 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.486260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.501867 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 
22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.505362 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.518831 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.535338 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.550839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.570237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.589825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.614599 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.623986 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.642223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:06Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728349 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.728369 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.755999 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:32:19.230229567 +0000 UTC Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.832371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:06 crc kubenswrapper[4793]: I0126 22:41:06.936496 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:06Z","lastTransitionTime":"2026-01-26T22:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.039985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.040148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.142689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.143316 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.246364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.291617 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.296752 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.296968 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.314755 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7617a467-2e7f-408e-b4d8-70624f991d83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3dd342b8a6dcc8374e6310fbe8b8ac499ea042962ef9442f2a4a143650fbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e568af9ef433beb4ee8b8b8e4a180e94c5ba42735c456b5e8355c37736b9f6\\\",\\\"exitCode\\\":0,\
\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.334320 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.350377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.359914 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee5cb0e57fa873622c40c24e72f24cbd44dd4c39642418c9edacc00262af6b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d5122ed6ca168553eaef16053bbc01436c5123ad8be68fbebcf1b3ed00e10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.379423 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2178ad7-9b5f-4304-810f-f35026a9a27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263c844f23f185fd2917632f72c7757522433b05d5de06c5515e742f7da36944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddad02b568468ce37b7c10fc5b00f66af1da4517b931e06fea0f077bbd7d61bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m59gn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wb7lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 
22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.396999 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9d8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7rl9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.417062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65af0fd8-004c-4fdc-97f5-05d8bf6c8127\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T22:39:59Z\\\",\\\"message\\\":\\\"W0126 22:39:49.103436 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 22:39:49.103798 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769467189 cert, and key in /tmp/serving-cert-876510412/serving-signer.crt, /tmp/serving-cert-876510412/serving-signer.key\\\\nI0126 22:39:49.491002 1 observer_polling.go:159] Starting file observer\\\\nW0126 22:39:49.494867 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 22:39:49.495300 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 22:39:49.498353 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-876510412/tls.crt::/tmp/serving-cert-876510412/tls.key\\\\\\\"\\\\nF0126 22:39:59.888772 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.431884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.444496 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjhd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24361290-ee1e-4424-b0f0-27d0b8f013ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dddf9ce46755ca8ba090f2e2b906967f968b85e937d148ee040c164794a8e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgrbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjhd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.459077 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576cd1b-4786-4a18-b570-5f961f464036\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5197efcde6d2061f530a68ea7c0c99ec4446554b84a1811a7d970a43797ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2a1df857107cb1b07fd3524ba4d508bb2694a49e2de3a96c9938ec4bbdecef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03472d948157db27ba9a9cf1410100a91b86b0e07784e05cd870871099ad333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cfee73da6031855a1329bc3b72b5315d6b7e4944abdb4e9b25152ccd7d58018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:39:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472470 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68b05073-b99f-4026-986a-0b8a0f7be18a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05dd31faa78c21e0f4d8689b6934e8883819159ede9fe2e9e55a5458f323a77b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrxc8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.472780 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.491417 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l5qgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:41:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:02Z\\\",\\\"message\\\":\\\"2026-01-26T22:40:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897\\\\n2026-01-26T22:40:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae640c77-31a8-4d19-b4fe-df5a37430897 to /host/opt/cni/bin/\\\\n2026-01-26T22:40:17Z [verbose] multus-daemon started\\\\n2026-01-26T22:40:17Z [verbose] Readiness Indicator file check\\\\n2026-01-26T22:41:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4df2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l5qgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.505663 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeea459b-f3c0-4460-ab0a-a8702ef49ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:39:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08aa8a564aa0bb28ea321a246d7cf9e5aba907b329486d468726bac1633c6a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7c5264db2e09e1642de1bfe4dd26aa54c30395cf15be173509db55b950482c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219169156dde8ada8d13936f89606293e35719a76cb1c2cfc297c07551df62fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:39:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.520366 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8b2fbb3dbab6bca870a6a482ddcd0bee89cfedb2e9e4c50681d4b9804956c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.534235 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efebf6cc1fda6b2261ac71a37ef9d0cc0924e49aa2d62b63b027d32804d7f9ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.545284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a78b43-c8a5-48e0-8fe3-89bc7b449391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d2b5e6972cd21a4eaf2670a9991df470aa8add5170ff1f732ad26a1f0d22353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ml9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5htjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.564233 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T22:41:05Z\\\",\\\"message\\\":\\\" 6842 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 22:41:05.732808 6842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 22:41:05.732874 6842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 22:41:05.732940 6842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 22:41:05.733009 6842 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 22:41:05.733095 6842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 22:41:05.733208 6842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 22:41:05.733305 6842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 22:41:05.733430 6842 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 22:41:05.733567 6842 factory.go:656] Stopping watch factory\\\\nI0126 22:41:05.733642 6842 ovnkube.go:599] Stopped ovnkube\\\\nI0126 22:41:05.733026 6842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 22:41:05.733162 6842 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 22:41:05.733274 6842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 22:41:05.733521 6842 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 22:41:05.733937 6842 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T22:41:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97spd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwtbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.576711 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.581579 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a14c1f-430a-4e5b-bd6a-01a959edbab1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1d01ef724ab4440bcff19d38a319a23cda0a02fc33b0a49b7a25d7784f91bc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T22:40:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e235253beed4710b58f91158f0d62a6aa11ace8bc494f22eb08c516efc2897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44210a35e5d631da01b231b4ac3f5457ca13e1e1e00c214a11ca9d612e637847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9f80742b705010f5c78a85fb3e4945dde2470d320167f94cbf8f64f3a3058f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dec4183de83c1d4147e1d0bbfc7728e2bad9fcffb3b9ffa4c750ae1df1b3644\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185db63b58ad8f201ff385d9add765d2640411fb5fb5d3590f027c24abad2706\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0e68c91f1fcdeb0e76263ed3672ad35129e241ef498e0b79c1b68fcc8d518c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T22:40:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T22:40:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxzr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T22:40:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ptkmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.599313 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T22:40:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T22:41:07Z is after 2025-08-24T17:21:41Z" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.679801 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.756935 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:51:25.382563946 +0000 UTC Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760384 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.760434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760520 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760646 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760717 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:07 crc kubenswrapper[4793]: E0126 22:41:07.760776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.787230 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.890769 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:07 crc kubenswrapper[4793]: I0126 22:41:07.993491 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:07Z","lastTransitionTime":"2026-01-26T22:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.095997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.096121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.199385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.301922 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404050 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.404105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.506360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.609450 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.712373 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.758025 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:20:16.09684163 +0000 UTC Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.815442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:08 crc kubenswrapper[4793]: I0126 22:41:08.918391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:08Z","lastTransitionTime":"2026-01-26T22:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.021645 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.124959 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.227399 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.329784 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.433525 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484499 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484683 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484789 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.484717754 +0000 UTC m=+148.473489296 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484799 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484887 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.484864168 +0000 UTC m=+148.473635710 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484932 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484943 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.484977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.484983 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485050 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485089 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485122 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485147 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485097 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.485084064 +0000 UTC m=+148.473855616 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485266 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485288 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.485263829 +0000 UTC m=+148.474035381 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.485579 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 22:42:13.485389102 +0000 UTC m=+148.474160644 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.536382 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.639392 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.742293 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.758579 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:24:24.655694967 +0000 UTC Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.759947 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.759971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.760034 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760077 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.760094 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760300 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760352 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:09 crc kubenswrapper[4793]: E0126 22:41:09.760477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.844690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.844947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.845096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.845171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.845255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:09 crc kubenswrapper[4793]: I0126 22:41:09.949994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:09Z","lastTransitionTime":"2026-01-26T22:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.054467 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.159448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.263309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.373497 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.477862 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.581249 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.685359 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.759280 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:05:00.908410418 +0000 UTC Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.785044 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.788595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.891758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.892763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.996801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.997720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.997881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.998035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:10 crc kubenswrapper[4793]: I0126 22:41:10.998165 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:10Z","lastTransitionTime":"2026-01-26T22:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.101739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.204632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205061 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.205610 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.308357 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.411365 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.514427 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.618466 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.723355 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760159 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:05:55.384045507 +0000 UTC Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760432 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760529 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.760599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760646 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.760876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.760920 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.761012 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:11 crc kubenswrapper[4793]: E0126 22:41:11.761087 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.826843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:11 crc kubenswrapper[4793]: I0126 22:41:11.930498 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:11Z","lastTransitionTime":"2026-01-26T22:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.034258 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.136963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.137136 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.241228 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.344851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.448267 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.551931 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.655831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.759714 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.760549 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:32:21.430864379 +0000 UTC Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.863697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:12 crc kubenswrapper[4793]: I0126 22:41:12.967640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:12Z","lastTransitionTime":"2026-01-26T22:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.070502 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.174571 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.278473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.381632 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.484859 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.587998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.588019 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.691581 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.759945 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.759992 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760129 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.760218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.760277 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760564 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:13 crc kubenswrapper[4793]: E0126 22:41:13.760676 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.760881 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:24:51.983854333 +0000 UTC Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.794419 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:13 crc kubenswrapper[4793]: I0126 22:41:13.897931 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:13Z","lastTransitionTime":"2026-01-26T22:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.001559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.002276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.002665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.002981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.003318 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.106699 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.209885 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.246733 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T22:41:14Z","lastTransitionTime":"2026-01-26T22:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.283092 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk"] Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.283777 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.285936 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.287354 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.289694 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.289850 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.327562 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.327534678 podStartE2EDuration="4.327534678s" podCreationTimestamp="2026-01-26 22:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.327342773 +0000 UTC m=+89.316114285" watchObservedRunningTime="2026-01-26 22:41:14.327534678 +0000 UTC m=+89.316306200" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.365638 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cjhd7" podStartSLOduration=62.36560229 podStartE2EDuration="1m2.36560229s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.363600546 +0000 UTC m=+89.352372098" watchObservedRunningTime="2026-01-26 22:41:14.36560229 +0000 UTC m=+89.354373812" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.397816 
4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.397795542 podStartE2EDuration="1m9.397795542s" podCreationTimestamp="2026-01-26 22:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.396746144 +0000 UTC m=+89.385517656" watchObservedRunningTime="2026-01-26 22:41:14.397795542 +0000 UTC m=+89.386567054" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.411162 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.411123434 podStartE2EDuration="38.411123434s" podCreationTimestamp="2026-01-26 22:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.410050314 +0000 UTC m=+89.398821836" watchObservedRunningTime="2026-01-26 22:41:14.411123434 +0000 UTC m=+89.399894996" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.424458 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jwwcw" podStartSLOduration=62.424427744 podStartE2EDuration="1m2.424427744s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.424345882 +0000 UTC m=+89.413117394" watchObservedRunningTime="2026-01-26 22:41:14.424427744 +0000 UTC m=+89.413199276" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.443399 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-l5qgq" podStartSLOduration=62.443362427 podStartE2EDuration="1m2.443362427s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.442657708 +0000 UTC m=+89.431429240" watchObservedRunningTime="2026-01-26 22:41:14.443362427 +0000 UTC m=+89.432133979" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.444996 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445052 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445073 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445113 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.445256 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.527264 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podStartSLOduration=62.52723858 podStartE2EDuration="1m2.52723858s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.502596523 +0000 UTC m=+89.491368035" watchObservedRunningTime="2026-01-26 22:41:14.52723858 +0000 UTC m=+89.516010102" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.545921 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.545976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546031 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546079 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.546892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.555414 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ptkmd" podStartSLOduration=62.555386063 podStartE2EDuration="1m2.555386063s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.554604962 +0000 UTC m=+89.543376524" watchObservedRunningTime="2026-01-26 22:41:14.555386063 +0000 UTC m=+89.544157595" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.561753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.573442 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.573419732 podStartE2EDuration="1m9.573419732s" podCreationTimestamp="2026-01-26 22:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.573077363 +0000 UTC m=+89.561848885" watchObservedRunningTime="2026-01-26 22:41:14.573419732 +0000 UTC m=+89.562191254" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.578117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rkvtk\" (UID: \"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.590128 4793 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.590099984 podStartE2EDuration="13.590099984s" podCreationTimestamp="2026-01-26 22:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.589274612 +0000 UTC m=+89.578046134" watchObservedRunningTime="2026-01-26 22:41:14.590099984 +0000 UTC m=+89.578871506" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.604870 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.657976 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wb7lh" podStartSLOduration=62.657945833 podStartE2EDuration="1m2.657945833s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:14.657624914 +0000 UTC m=+89.646396436" watchObservedRunningTime="2026-01-26 22:41:14.657945833 +0000 UTC m=+89.646717385" Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.761354 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:12:01.175684834 +0000 UTC Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.761794 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 22:41:14 crc kubenswrapper[4793]: I0126 22:41:14.771438 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.326278 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" event={"ID":"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2","Type":"ContainerStarted","Data":"d65fc65845d91cee768e8e6450b6561d5d0b511d16ad27a6a9f71f69756281d5"} Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.326374 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" event={"ID":"ce7ec5cc-4d3c-4cd1-b32c-3f33143504e2","Type":"ContainerStarted","Data":"8eabc53d55bdd22b3981705629ad2b1a8d86f0ef8864e932f2dddc83049ecb2f"} Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.349817 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rkvtk" podStartSLOduration=63.349792452 podStartE2EDuration="1m3.349792452s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:15.347992904 +0000 UTC m=+90.336764466" watchObservedRunningTime="2026-01-26 22:41:15.349792452 +0000 UTC m=+90.338563994" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.760468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.760525 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.760573 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763226 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:15 crc kubenswrapper[4793]: I0126 22:41:15.763261 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763395 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:15 crc kubenswrapper[4793]: E0126 22:41:15.763622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.760592 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.760729 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.760975 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:17 crc kubenswrapper[4793]: I0126 22:41:17.761323 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761586 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761679 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:17 crc kubenswrapper[4793]: E0126 22:41:17.761809 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:18 crc kubenswrapper[4793]: I0126 22:41:18.761851 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:18 crc kubenswrapper[4793]: E0126 22:41:18.762165 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760248 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760406 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760409 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:19 crc kubenswrapper[4793]: I0126 22:41:19.760590 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.760571 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.760872 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.761324 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:19 crc kubenswrapper[4793]: E0126 22:41:19.761472 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760549 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760560 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:21 crc kubenswrapper[4793]: I0126 22:41:21.760518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.760662 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.760831 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.760949 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:21 crc kubenswrapper[4793]: E0126 22:41:21.761326 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761093 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760666 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761334 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:23 crc kubenswrapper[4793]: I0126 22:41:23.760645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761506 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:23 crc kubenswrapper[4793]: E0126 22:41:23.761604 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760317 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760362 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760397 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:25 crc kubenswrapper[4793]: I0126 22:41:25.760338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.760533 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.762624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.762706 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:25 crc kubenswrapper[4793]: E0126 22:41:25.763168 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.760280 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.761503 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.761745 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.761774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:27 crc kubenswrapper[4793]: I0126 22:41:27.761918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.762179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.762081 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:27 crc kubenswrapper[4793]: E0126 22:41:27.762632 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.760513 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.760561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.760720 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.761002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:29 crc kubenswrapper[4793]: I0126 22:41:29.761052 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.761138 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.761419 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:29 crc kubenswrapper[4793]: E0126 22:41:29.761654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:30 crc kubenswrapper[4793]: I0126 22:41:30.746082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:30 crc kubenswrapper[4793]: E0126 22:41:30.746424 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:41:30 crc kubenswrapper[4793]: E0126 22:41:30.746550 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs podName:2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc nodeName:}" failed. No retries permitted until 2026-01-26 22:42:34.746516665 +0000 UTC m=+169.735288207 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs") pod "network-metrics-daemon-7rl9w" (UID: "2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.760750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.760833 4793 util.go:30] "No sandbox for pod can be found. 
Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.760750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.760960 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:31 crc kubenswrapper[4793]: I0126 22:41:31.761047 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.761240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.761410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:31 crc kubenswrapper[4793]: E0126 22:41:31.761475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:32 crc kubenswrapper[4793]: I0126 22:41:32.761653 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"
Jan 26 22:41:32 crc kubenswrapper[4793]: E0126 22:41:32.761919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760311 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760442 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
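
"back-off 40s" is kubelet's standard crash-loop schedule at work: the first restart of a failed container is delayed 10s, the delay doubles on each subsequent crash up to a 5-minute cap, and it resets once the container has run for 10 minutes without failing. So ovnkube-controller is on at least its third restart of the current cycle, and nothing can write a CNI config into /etc/kubernetes/cni/net.d/ until this container holds. The arithmetic, as a sketch (the schedule is documented Kubernetes behavior; the helper name is mine, not a kubelet API):

```python
# Documented CrashLoopBackOff schedule: 10s base, doubling per restart,
# capped at 5 minutes. Illustrative helper only.
def crashloop_delay_s(restart: int, base: int = 10, cap: int = 300) -> int:
    """Back-off in seconds before restart number `restart` (0-based)."""
    return min(base * 2 ** restart, cap)

print([crashloop_delay_s(n) for n in range(6)])  # [10, 20, 40, 80, 160, 300]
```
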
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760328 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:33 crc kubenswrapper[4793]: I0126 22:41:33.760446 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760623 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760497 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:33 crc kubenswrapper[4793]: E0126 22:41:33.760866 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.760695 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.760754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.761982 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.762005 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:35 crc kubenswrapper[4793]: I0126 22:41:35.762061 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.762269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.762325 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:35 crc kubenswrapper[4793]: E0126 22:41:35.762482 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.760873 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761135 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.761510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.761568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761619 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:37 crc kubenswrapper[4793]: I0126 22:41:37.761510 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761889 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:37 crc kubenswrapper[4793]: E0126 22:41:37.761769 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760264 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.760654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:39 crc kubenswrapper[4793]: I0126 22:41:39.760773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.760960 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.761032 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:39 crc kubenswrapper[4793]: E0126 22:41:39.761131 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.760792 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.760902 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.760948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761233 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:41 crc kubenswrapper[4793]: I0126 22:41:41.761271 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761480 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761585 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:41 crc kubenswrapper[4793]: E0126 22:41:41.761669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.760470 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.760578 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.760636 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.760803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.760996 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.761032 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.761337 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.761695 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:43 crc kubenswrapper[4793]: I0126 22:41:43.762442 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"
Jan 26 22:41:43 crc kubenswrapper[4793]: E0126 22:41:43.762644 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwtbk_openshift-ovn-kubernetes(358c250d-f5aa-4f0f-9fa5-7b699e6c73bf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.715254 4793 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
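
Here the node-level consequence surfaces: kubelet_node_status.go:497 reports "Node not becoming ready in time after startup" because the runtime still reports NetworkReady=false, so the node's Ready condition cannot flip while ovnkube-controller keeps crashing (the 22:41:43 entries show the same back-off 40s cycle repeating). To see which workloads a capture like this is starving, tallying the per-pod sync errors is enough; a rough triage sketch, assuming Python 3 and a saved copy of this journal at the hypothetical path kubelet.log:

```python
import re
from collections import Counter

# Count "Error syncing pod, skipping" entries per pod in a saved copy of
# this journal ("kubelet.log" is a placeholder path, not from the log).
text = open("kubelet.log", encoding="utf-8").read()
pods = re.findall(r'"Error syncing pod, skipping".*?pod="([^"]+)"', text)
for pod, count in Counter(pods).most_common():
    print(f"{count:4d}  {pod}")
```
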
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760523 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:41:45 crc kubenswrapper[4793]: I0126 22:41:45.760629 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762428 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762549 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762657 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.762782 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 22:41:45 crc kubenswrapper[4793]: E0126 22:41:45.883490 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760119 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760232 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760305 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:47 crc kubenswrapper[4793]: I0126 22:41:47.760133 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760534 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:47 crc kubenswrapper[4793]: E0126 22:41:47.760797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.460669 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461453 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/0.log" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461519 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298" exitCode=1 Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerDied","Data":"2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"} Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.461614 4793 scope.go:117] "RemoveContainer" containerID="a931d700c73820ee9fb659e6b3c5b7f148187ecf64db9b7a8300664e17c02b08" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.464003 4793 scope.go:117] "RemoveContainer" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298" Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.464821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-l5qgq_openshift-multus(2e6daa0d-7641-46e1-b9ab-8479c1cd00d6)\"" pod="openshift-multus/multus-l5qgq" podUID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760522 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760538 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761136 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:49 crc kubenswrapper[4793]: I0126 22:41:49.760575 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761443 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:49 crc kubenswrapper[4793]: E0126 22:41:49.761565 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:50 crc kubenswrapper[4793]: I0126 22:41:50.468298 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:41:50 crc kubenswrapper[4793]: E0126 22:41:50.884775 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.760968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.761043 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.761293 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.761326 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:51 crc kubenswrapper[4793]: I0126 22:41:51.761458 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.761645 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.761803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:51 crc kubenswrapper[4793]: E0126 22:41:51.762031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:53 crc kubenswrapper[4793]: I0126 22:41:53.760596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.760762 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.761003 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.761164 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:53 crc kubenswrapper[4793]: E0126 22:41:53.761365 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.760385 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.760489 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.762339 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:55 crc kubenswrapper[4793]: I0126 22:41:55.762332 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762519 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762635 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.762814 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:55 crc kubenswrapper[4793]: E0126 22:41:55.886225 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760428 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.760618 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760695 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760729 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:57 crc kubenswrapper[4793]: I0126 22:41:57.760785 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.761144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.761347 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:41:57 crc kubenswrapper[4793]: E0126 22:41:57.761487 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:58 crc kubenswrapper[4793]: I0126 22:41:58.761777 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.503617 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.508047 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerStarted","Data":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.508525 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.538896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podStartSLOduration=107.538877148 podStartE2EDuration="1m47.538877148s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:41:59.538705292 +0000 UTC m=+134.527476824" watchObservedRunningTime="2026-01-26 22:41:59.538877148 +0000 UTC m=+134.527648660" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.697319 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7rl9w"] Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.697452 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.697560 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.760002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.760002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:41:59 crc kubenswrapper[4793]: I0126 22:41:59.760030 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.760166 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.760467 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:41:59 crc kubenswrapper[4793]: E0126 22:41:59.760518 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:00 crc kubenswrapper[4793]: E0126 22:42:00.887725 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.760971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.761044 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.761074 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761234 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:01 crc kubenswrapper[4793]: I0126 22:42:01.761299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761455 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761557 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:01 crc kubenswrapper[4793]: E0126 22:42:01.761666 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760098 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760111 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.760526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:03 crc kubenswrapper[4793]: I0126 22:42:03.760566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.760754 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.760920 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:03 crc kubenswrapper[4793]: E0126 22:42:03.761178 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:04 crc kubenswrapper[4793]: I0126 22:42:04.760863 4793 scope.go:117] "RemoveContainer" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.535066 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.535608 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382"} Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.761433 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.761474 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.761556 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.762771 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:05 crc kubenswrapper[4793]: I0126 22:42:05.762793 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.762948 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.763059 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.763127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:05 crc kubenswrapper[4793]: E0126 22:42:05.889237 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760763 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760916 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:07 crc kubenswrapper[4793]: I0126 22:42:07.760997 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761176 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761367 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761576 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:07 crc kubenswrapper[4793]: E0126 22:42:07.761699 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.759991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.760093 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.760139 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:09 crc kubenswrapper[4793]: I0126 22:42:09.760161 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760318 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760544 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7rl9w" podUID="2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 22:42:09 crc kubenswrapper[4793]: E0126 22:42:09.760848 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.760237 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.760259 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.760947 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.761231 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.763174 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.764114 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765009 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765067 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765221 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 22:42:11 crc kubenswrapper[4793]: I0126 22:42:11.765387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.542142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:13 crc kubenswrapper[4793]: E0126 22:42:13.542397 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:44:15.542351002 +0000 UTC m=+270.531122554 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.542973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.543031 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.543085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.543125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.544820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.551375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.552234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 
22:42:13.553336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.586817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.600712 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 22:42:13 crc kubenswrapper[4793]: I0126 22:42:13.624930 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:13 crc kubenswrapper[4793]: W0126 22:42:13.923783 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98 WatchSource:0}: Error finding container 6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98: Status 404 returned error can't find the container with id 6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98 Jan 26 22:42:14 crc kubenswrapper[4793]: W0126 22:42:14.191136 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef WatchSource:0}: Error finding container 9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef: Status 404 returned error can't find the container with id 9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef Jan 26 22:42:14 crc kubenswrapper[4793]: W0126 22:42:14.211087 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b WatchSource:0}: Error finding container 43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b: Status 404 returned error can't find the container with id 43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.575171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2f088f845299118c92b9bc33deda902cd2ec4f845f6b4347173c9f161e6966dc"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.576026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9cca13378251c7756913aa36f814f2ab9f345b1b1dc4324691b51048b62d12ef"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.579169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
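The W-level manager.go:1169 entries above come from cadvisor (embedded in the kubelet) racing container startup: its cgroup watcher sees a crio-<id> cgroup appear before the runtime can report that container, the status lookup returns 404, and the watch event is dropped. Nothing is lost: the same ids (6e603ffb…, 9cca1337…, 43cb98d1…) reappear in the surrounding PLEG ContainerStarted events once the containers are up. The systemd slice in the event name encodes the pod UID with underscores, so decoding it ties each warning back to a pod seen earlier (3b6479f0-… is network-check-target-xd92c). A convenience sketch for reading these names, not kubelet code:

```go
package main

import (
	"fmt"
	"strings"
)

// decodePodSlice recovers a pod UID from a kubepods systemd slice name such as
// "kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice".
func decodePodSlice(slice string) (string, bool) {
	name := strings.TrimSuffix(slice, ".slice")
	i := strings.Index(name, "-pod")
	if i < 0 {
		return "", false
	}
	// systemd unit names cannot carry the UID's dashes, so the kubelet
	// writes underscores; map them back to the API-server form.
	return strings.ReplaceAll(name[i+len("-pod"):], "_", "-"), true
}

func main() {
	uid, _ := decodePodSlice("kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice")
	fmt.Println(uid) // 3b6479f0-333b-4a96-9adf-2099afdc2447 (network-check-target-xd92c)
}
```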
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"aaf52168a53ffd309e321091c60bc0d60a2322b0deadea08631dbd221d5731fc"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.580553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"43cb98d1c3b07bf4e268aeb02ba287e41127cf29f89b3cf70e0a7a4bbf06962b"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.582649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3df8a6d34f47f8f5fa901d76254e2e15932e69736a3fcdbbc01d5dc978a0f917"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.582692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6e603ffb10c2b9a8a36ebcda7fcbb479159dd06d47b17da8ad2165ac326bcd98"} Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.582955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.797857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.856075 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nd4pl"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.857224 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.864268 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.864274 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.873164 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.873699 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.878382 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.885029 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.886438 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.886707 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896329 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896378 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896720 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.896849 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.897818 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.899894 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.900789 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.901720 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhc46"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.904785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.905059 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.905399 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.907201 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.907534 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.908527 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.908977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.915594 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.918019 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.918858 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.919585 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.920037 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.920730 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.921587 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.923010 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.923637 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.923891 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.924018 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.924371 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.924750 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925018 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925284 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925598 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.925859 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.926097 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.926465 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.927343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.927469 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.927816 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.928343 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g9vhm"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.928731 4793 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.929347 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931173 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931523 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931735 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.931941 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932130 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932354 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932551 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932742 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.932587 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933136 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933439 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933371 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.933910 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934054 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934130 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934242 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934302 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.934840 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935066 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935168 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935097 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935386 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935434 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935635 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935715 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.935645 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.936335 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.936507 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.938963 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939225 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939566 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939696 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939847 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939926 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940039 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940091 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.939572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940559 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.940807 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941011 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941428 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941834 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.941896 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.953535 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.954771 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.957685 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2fq8c"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.972182 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.975516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.975782 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.975872 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"] Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.976036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977389 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977807 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977970 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978059 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.977912 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978270 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.978857 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.979705 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.979931 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.979980 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.980148 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.980515 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.996144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.996406 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.998404 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-node-pullsecrets\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999136 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-audit\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-serving-cert\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999217 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmmw\" (UniqueName: 
\"kubernetes.io/projected/1b75b5dd-f846-44d1-b751-5d8241200a89-kube-api-access-8dmmw\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwn7v\" (UniqueName: \"kubernetes.io/projected/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-kube-api-access-hwn7v\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-auth-proxy-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999301 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-serving-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999383 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-policies\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-client\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtkq\" (UniqueName: \"kubernetes.io/projected/49db6ef3-ecbb-44ba-90bd-6b4fad356374-kube-api-access-9wtkq\") pod 
\"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999445 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-encryption-config\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-images\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999478 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49db6ef3-ecbb-44ba-90bd-6b4fad356374-machine-approver-tls\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999513 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-client\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999528 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:14 crc kubenswrapper[4793]: I0126 22:42:14.999565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999589 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-config\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999609 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-dir\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-serving-cert\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-audit-dir\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999688 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999707 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-image-import-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-encryption-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpnk\" (UniqueName: \"kubernetes.io/projected/108460a3-822d-405c-a0fa-cdd12ea4123f-kube-api-access-fvpnk\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:14.999825 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.000489 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.000863 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.002327 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.004156 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.005083 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.005409 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.006448 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.007716 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.009196 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.010539 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.011172 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.011537 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.013170 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.018255 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.018554 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019063 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019428 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019727 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2l78j"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.019852 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020134 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020392 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020545 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020679 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.020821 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.022170 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.022431 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.024502 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.026633 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.027496 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.030575 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.031018 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.031335 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.031540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.045310 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xpkxl"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.045613 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.045948 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kszs"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.046492 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.046955 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.047654 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.047935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.048374 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.048974 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.049146 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pj7sl"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.049727 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.066380 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.068373 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.069618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.070117 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.070519 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.071944 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.074269 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.075757 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.077754 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.080937 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.081128 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhc46"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.086077 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.088959 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.089307 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.090618 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nd4pl"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.090781 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.091023 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.093241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.093843 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.094707 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.095913 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.096862 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.098343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.099850 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100257 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100317 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100342 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvbn\" (UniqueName: 
\"kubernetes.io/projected/614c35d6-ed0a-4b6d-9241-6df532fa9528-kube-api-access-sfvbn\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2f7z\" (UniqueName: \"kubernetes.io/projected/a7012682-3c72-4541-ae5b-5c1522508f39-kube-api-access-l2f7z\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms66w\" (UniqueName: \"kubernetes.io/projected/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-kube-api-access-ms66w\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-config\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-dir\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100490 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7012682-3c72-4541-ae5b-5c1522508f39-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100511 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82sc\" (UniqueName: \"kubernetes.io/projected/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-kube-api-access-p82sc\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100536 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrwc\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-kube-api-access-mfrwc\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f225855-c898-4519-96fb-c0556fb46513-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8jh\" (UniqueName: \"kubernetes.io/projected/57535671-8438-44b0-95f3-24679160fb8d-kube-api-access-vn8jh\") pod \"downloads-7954f5f757-2l78j\" (UID: \"57535671-8438-44b0-95f3-24679160fb8d\") " pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-serving-cert\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100646 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-audit-dir\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100671 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-config\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-config\") pod 
\"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100748 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-image-import-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100819 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-encryption-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100884 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-config\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100929 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f111a8c3-da3b-48f7-aad2-693c62b93659-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100975 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f111a8c3-da3b-48f7-aad2-693c62b93659-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.100997 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsfh\" (UniqueName: \"kubernetes.io/projected/36b0f3df-e65a-41d3-b718-916bd868f437-kube-api-access-czsfh\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b0f3df-e65a-41d3-b718-916bd868f437-serving-cert\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfcn\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-kube-api-access-rnfcn\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101106 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2f225855-c898-4519-96fb-c0556fb46513-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpnk\" (UniqueName: \"kubernetes.io/projected/108460a3-822d-405c-a0fa-cdd12ea4123f-kube-api-access-fvpnk\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101154 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101208 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101234 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f111a8c3-da3b-48f7-aad2-693c62b93659-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101374 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101396 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101419 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101474 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101489 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101515 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101541 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-node-pullsecrets\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-audit\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-serving-cert\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmmw\" (UniqueName: \"kubernetes.io/projected/1b75b5dd-f846-44d1-b751-5d8241200a89-kube-api-access-8dmmw\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101644 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101673 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f225855-c898-4519-96fb-c0556fb46513-config\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.101695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102044 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-dir\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102382 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-audit-dir\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102470 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.102790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/108460a3-822d-405c-a0fa-cdd12ea4123f-node-pullsecrets\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.103851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104077 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-config\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-audit\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104555 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hwn7v\" (UniqueName: \"kubernetes.io/projected/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-kube-api-access-hwn7v\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-serving-cert\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-service-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-image-import-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104655 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-auth-proxy-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f59h\" (UniqueName: \"kubernetes.io/projected/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-kube-api-access-4f59h\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104713 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbn82\" (UniqueName: \"kubernetes.io/projected/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-kube-api-access-mbn82\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104815 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/614c35d6-ed0a-4b6d-9241-6df532fa9528-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.104974 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105050 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-serving-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105171 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-policies\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-client\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-config\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105322 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtkq\" (UniqueName: \"kubernetes.io/projected/49db6ef3-ecbb-44ba-90bd-6b4fad356374-kube-api-access-9wtkq\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105412 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-encryption-config\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-auth-proxy-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-images\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105549 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105566 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f869ac-7928-4749-9ba7-04ec01b48bc0-serving-cert\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105583 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7012682-3c72-4541-ae5b-5c1522508f39-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105614 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49db6ef3-ecbb-44ba-90bd-6b4fad356374-machine-approver-tls\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105655 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-trusted-ca\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nth\" (UniqueName: \"kubernetes.io/projected/f6f869ac-7928-4749-9ba7-04ec01b48bc0-kube-api-access-72nth\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105717 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-client\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-client\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.105768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106137 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106475 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-images\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106575 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-serving-ca\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.106626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49db6ef3-ecbb-44ba-90bd-6b4fad356374-config\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.107407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: 
\"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.108232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-serving-cert\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.109520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b75b5dd-f846-44d1-b751-5d8241200a89-audit-policies\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.109769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-etcd-client\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.109834 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-encryption-config\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49db6ef3-ecbb-44ba-90bd-6b4fad356374-machine-approver-tls\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110355 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b75b5dd-f846-44d1-b751-5d8241200a89-serving-cert\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110616 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.110787 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-etcd-client\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.111149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod 
\"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.111271 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jv28b"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.111291 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/108460a3-822d-405c-a0fa-cdd12ea4123f-encryption-config\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.112610 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.113089 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.115139 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g9vhm"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.116811 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2fq8c"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.117243 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.118078 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.119251 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.119459 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-65h9z"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.120084 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.120731 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.123624 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.125881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.127172 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.128501 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.129811 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2l78j"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.132732 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pj7sl"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.133120 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.135274 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.137209 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.138829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.139836 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.141087 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.143265 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.145442 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kszs"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.146693 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.148523 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.150030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.153606 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.155157 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rz5xt"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.156172 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.156456 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfbz"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.157646 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.158030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.159314 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.161216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.162381 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.163504 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.164892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.166198 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.167314 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rz5xt"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.168757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.178988 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.180935 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jv28b"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.183609 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfbz"] Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.199945 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 
22:42:15.206511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-config\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206549 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206617 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.206762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-config\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f869ac-7928-4749-9ba7-04ec01b48bc0-serving-cert\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7012682-3c72-4541-ae5b-5c1522508f39-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: 
\"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.207992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-trusted-ca\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72nth\" (UniqueName: \"kubernetes.io/projected/f6f869ac-7928-4749-9ba7-04ec01b48bc0-kube-api-access-72nth\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-client\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208085 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc 
kubenswrapper[4793]: I0126 22:42:15.208096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208202 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvbn\" (UniqueName: \"kubernetes.io/projected/614c35d6-ed0a-4b6d-9241-6df532fa9528-kube-api-access-sfvbn\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208437 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2f7z\" (UniqueName: \"kubernetes.io/projected/a7012682-3c72-4541-ae5b-5c1522508f39-kube-api-access-l2f7z\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms66w\" (UniqueName: \"kubernetes.io/projected/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-kube-api-access-ms66w\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7012682-3c72-4541-ae5b-5c1522508f39-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208497 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82sc\" (UniqueName: \"kubernetes.io/projected/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-kube-api-access-p82sc\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208513 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrwc\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-kube-api-access-mfrwc\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208531 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f225855-c898-4519-96fb-c0556fb46513-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208615 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8jh\" (UniqueName: \"kubernetes.io/projected/57535671-8438-44b0-95f3-24679160fb8d-kube-api-access-vn8jh\") pod \"downloads-7954f5f757-2l78j\" (UID: \"57535671-8438-44b0-95f3-24679160fb8d\") " pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208639 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208694 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-config\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208711 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-config\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " 
pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208757 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-config\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f111a8c3-da3b-48f7-aad2-693c62b93659-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsfh\" (UniqueName: \"kubernetes.io/projected/36b0f3df-e65a-41d3-b718-916bd868f437-kube-api-access-czsfh\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f111a8c3-da3b-48f7-aad2-693c62b93659-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.208975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b0f3df-e65a-41d3-b718-916bd868f437-serving-cert\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209117 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnfcn\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-kube-api-access-rnfcn\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209153 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209171 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f225855-c898-4519-96fb-c0556fb46513-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209208 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6g7\" (UniqueName: \"kubernetes.io/projected/1c808b47-7d86-4456-ae60-cb83d2a58262-kube-api-access-xs6g7\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209386 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f111a8c3-da3b-48f7-aad2-693c62b93659-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209517 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 
22:42:15.209564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7012682-3c72-4541-ae5b-5c1522508f39-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209812 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209876 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f225855-c898-4519-96fb-c0556fb46513-config\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209902 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqs7g\" (UniqueName: \"kubernetes.io/projected/81bdaed4-2088-404f-a937-9a682635b5ab-kube-api-access-kqs7g\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-config\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210118 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210477 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209942 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.209766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-serving-cert\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-service-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210922 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-images\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f59h\" (UniqueName: \"kubernetes.io/projected/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-kube-api-access-4f59h\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210965 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c808b47-7d86-4456-ae60-cb83d2a58262-proxy-tls\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbn82\" (UniqueName: \"kubernetes.io/projected/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-kube-api-access-mbn82\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211048 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211066 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2lx\" (UniqueName: \"kubernetes.io/projected/4f027876-8332-446c-9ea2-29d38ba7fcfc-kube-api-access-pq2lx\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211110 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/614c35d6-ed0a-4b6d-9241-6df532fa9528-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211131 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211150 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211178 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211214 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: 
\"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.210793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-config\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.211380 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212084 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f6f869ac-7928-4749-9ba7-04ec01b48bc0-trusted-ca\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212343 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212514 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.212900 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.213489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7012682-3c72-4541-ae5b-5c1522508f39-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214048 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b0f3df-e65a-41d3-b718-916bd868f437-service-ca-bundle\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214102 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214504 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.214860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.215517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216014 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216040 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216177 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-service-ca\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216249 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216343 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216537 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b0f3df-e65a-41d3-b718-916bd868f437-serving-cert\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216683 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/614c35d6-ed0a-4b6d-9241-6df532fa9528-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.216691 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.217585 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-etcd-client\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.218226 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f869ac-7928-4749-9ba7-04ec01b48bc0-serving-cert\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.218500 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f225855-c898-4519-96fb-c0556fb46513-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.218833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-serving-cert\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219095 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219446 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219564 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219738 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219854 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.219938 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.220323 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.220693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.221234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.221884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f225855-c898-4519-96fb-c0556fb46513-config\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.227856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.239205 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.260118 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.279624 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.299024 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.313534 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrmwq\" 
(UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.313943 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs6g7\" (UniqueName: \"kubernetes.io/projected/1c808b47-7d86-4456-ae60-cb83d2a58262-kube-api-access-xs6g7\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqs7g\" (UniqueName: \"kubernetes.io/projected/81bdaed4-2088-404f-a937-9a682635b5ab-kube-api-access-kqs7g\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314150 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314399 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-images\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c808b47-7d86-4456-ae60-cb83d2a58262-proxy-tls\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314617 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314719 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2lx\" (UniqueName: \"kubernetes.io/projected/4f027876-8332-446c-9ea2-29d38ba7fcfc-kube-api-access-pq2lx\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314852 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.314948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.319659 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.340386 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.350155 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.359853 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.369709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.392604 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.400414 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.402457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.419577 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.440348 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.445943 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: 
\"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.459725 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.461180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-config\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.480510 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.498857 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.508378 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.519617 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.522762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.539960 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.559422 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.563407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f111a8c3-da3b-48f7-aad2-693c62b93659-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.580391 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.602818 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.611361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f111a8c3-da3b-48f7-aad2-693c62b93659-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.619604 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.626127 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c808b47-7d86-4456-ae60-cb83d2a58262-images\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.639281 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.659470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.670929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c808b47-7d86-4456-ae60-cb83d2a58262-proxy-tls\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.680677 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.700875 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.721024 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.740127 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.760723 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.780322 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.800356 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.820671 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.838972 4793 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.859272 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.881179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.900233 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.919486 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.941627 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.960719 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 22:42:15 crc kubenswrapper[4793]: I0126 22:42:15.980427 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.002723 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.019453 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.039501 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.055409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.057730 4793 request.go:700] Waited for 1.008629303s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.060911 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.080632 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.099952 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.120279 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.142004 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.161289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.181225 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.200868 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.220459 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.240089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.260396 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.279953 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.301173 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.314811 4793 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.314905 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config podName:81bdaed4-2088-404f-a937-9a682635b5ab nodeName:}" failed. No retries permitted until 2026-01-26 22:42:16.814886213 +0000 UTC m=+151.803657725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config") pod "service-ca-operator-777779d784-bcdkj" (UID: "81bdaed4-2088-404f-a937-9a682635b5ab") : failed to sync configmap cache: timed out waiting for the condition Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.314994 4793 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.315059 4793 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.315093 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert podName:4f027876-8332-446c-9ea2-29d38ba7fcfc nodeName:}" failed. No retries permitted until 2026-01-26 22:42:16.815060839 +0000 UTC m=+151.803832381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert") pod "catalog-operator-68c6474976-r4dq5" (UID: "4f027876-8332-446c-9ea2-29d38ba7fcfc") : failed to sync secret cache: timed out waiting for the condition Jan 26 22:42:16 crc kubenswrapper[4793]: E0126 22:42:16.315129 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert podName:81bdaed4-2088-404f-a937-9a682635b5ab nodeName:}" failed. No retries permitted until 2026-01-26 22:42:16.81511071 +0000 UTC m=+151.803882252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert") pod "service-ca-operator-777779d784-bcdkj" (UID: "81bdaed4-2088-404f-a937-9a682635b5ab") : failed to sync secret cache: timed out waiting for the condition Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.320841 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.340011 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.360805 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.379437 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.399776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.419689 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.439333 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.462046 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.480550 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.499763 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.519232 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.553984 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.560589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.579843 4793 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.599591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.621152 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.671748 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpnk\" (UniqueName: \"kubernetes.io/projected/108460a3-822d-405c-a0fa-cdd12ea4123f-kube-api-access-fvpnk\") pod \"apiserver-76f77b778f-nd4pl\" (UID: \"108460a3-822d-405c-a0fa-cdd12ea4123f\") " pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.690680 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"route-controller-manager-6576b87f9c-r5m9x\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.712459 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.718179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmmw\" (UniqueName: \"kubernetes.io/projected/1b75b5dd-f846-44d1-b751-5d8241200a89-kube-api-access-8dmmw\") pod \"apiserver-7bbb656c7d-dw8lf\" (UID: \"1b75b5dd-f846-44d1-b751-5d8241200a89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.730958 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwn7v\" (UniqueName: \"kubernetes.io/projected/8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a-kube-api-access-hwn7v\") pod \"machine-api-operator-5694c8668f-xhc46\" (UID: \"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.741809 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.742566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.751094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtkq\" (UniqueName: \"kubernetes.io/projected/49db6ef3-ecbb-44ba-90bd-6b4fad356374-kube-api-access-9wtkq\") pod \"machine-approver-56656f9798-h84t5\" (UID: \"49db6ef3-ecbb-44ba-90bd-6b4fad356374\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.760849 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.775663 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.780550 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.803780 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.808581 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.827183 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 22:42:16 crc kubenswrapper[4793]: W0126 22:42:16.829915 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49db6ef3_ecbb_44ba_90bd_6b4fad356374.slice/crio-714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1 WatchSource:0}: Error finding container 714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1: Status 404 returned error can't find the container with id 714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1 Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839174 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.839846 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.840923 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.846068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bdaed4-2088-404f-a937-9a682635b5ab-config\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.849225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f027876-8332-446c-9ea2-29d38ba7fcfc-srv-cert\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.850283 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bdaed4-2088-404f-a937-9a682635b5ab-serving-cert\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.901822 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.924576 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.938846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.958615 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.980917 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 22:42:16 crc kubenswrapper[4793]: I0126 22:42:16.986440 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nd4pl"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:16.999802 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.005323 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf"] Jan 26 22:42:17 crc kubenswrapper[4793]: W0126 22:42:17.016283 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b75b5dd_f846_44d1_b751_5d8241200a89.slice/crio-7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a WatchSource:0}: Error finding container 7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a: Status 404 returned error can't find the container with id 7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.019977 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.034149 4793 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.055894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"console-f9d7485db-57ccj\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") " pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.072969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"controller-manager-879f6c89f-2jn5q\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.078177 4793 request.go:700] Waited for 1.869508847s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.084610 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhc46"] Jan 26 22:42:17 crc kubenswrapper[4793]: W0126 22:42:17.093407 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b2fe8d2_137a_46a7_a57e_94a1c4d91f8a.slice/crio-493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851 WatchSource:0}: Error finding container 493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851: Status 404 returned error can't find the container with id 493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.096461 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvbn\" (UniqueName: \"kubernetes.io/projected/614c35d6-ed0a-4b6d-9241-6df532fa9528-kube-api-access-sfvbn\") pod \"cluster-samples-operator-665b6dd947-bscpb\" (UID: \"614c35d6-ed0a-4b6d-9241-6df532fa9528\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.121023 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms66w\" (UniqueName: \"kubernetes.io/projected/b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af-kube-api-access-ms66w\") pod \"openshift-config-operator-7777fb866f-x2n4v\" (UID: \"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.134099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea3e8958-c4f8-41bc-b5ca-6be701416ea7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ml4kf\" (UID: \"ea3e8958-c4f8-41bc-b5ca-6be701416ea7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.152591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrwc\" (UniqueName: 
\"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-kube-api-access-mfrwc\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.176464 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.180473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.194109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f225855-c898-4519-96fb-c0556fb46513-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qmgss\" (UID: \"2f225855-c898-4519-96fb-c0556fb46513\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.199063 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.214080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2f7z\" (UniqueName: \"kubernetes.io/projected/a7012682-3c72-4541-ae5b-5c1522508f39-kube-api-access-l2f7z\") pod \"openshift-controller-manager-operator-756b6f6bc6-99f2s\" (UID: \"a7012682-3c72-4541-ae5b-5c1522508f39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.238081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82sc\" (UniqueName: \"kubernetes.io/projected/3d20f3c1-8995-4eb3-9c83-3219e7ad35ec-kube-api-access-p82sc\") pod \"openshift-apiserver-operator-796bbdcf4f-x8bfn\" (UID: \"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.240800 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.255554 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.256179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8jh\" (UniqueName: \"kubernetes.io/projected/57535671-8438-44b0-95f3-24679160fb8d-kube-api-access-vn8jh\") pod \"downloads-7954f5f757-2l78j\" (UID: \"57535671-8438-44b0-95f3-24679160fb8d\") " pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.263716 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.270180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.276795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72nth\" (UniqueName: \"kubernetes.io/projected/f6f869ac-7928-4749-9ba7-04ec01b48bc0-kube-api-access-72nth\") pod \"console-operator-58897d9998-g9vhm\" (UID: \"f6f869ac-7928-4749-9ba7-04ec01b48bc0\") " pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.296109 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.299134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnfcn\" (UniqueName: \"kubernetes.io/projected/cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c-kube-api-access-rnfcn\") pod \"cluster-image-registry-operator-dc59b4c8b-mb5p8\" (UID: \"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.304251 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2l78j" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.315142 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f111a8c3-da3b-48f7-aad2-693c62b93659-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-msvjw\" (UID: \"f111a8c3-da3b-48f7-aad2-693c62b93659\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.376348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.379156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsfh\" (UniqueName: \"kubernetes.io/projected/36b0f3df-e65a-41d3-b718-916bd868f437-kube-api-access-czsfh\") pod \"authentication-operator-69f744f599-gxxfj\" (UID: \"36b0f3df-e65a-41d3-b718-916bd868f437\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.385396 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.389577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/426a3518-6fb9-4c1a-ab27-5a6c6222cd2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5mjq6\" (UID: \"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.392341 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"oauth-openshift-558db77b4-lvnpc\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.396104 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f59h\" (UniqueName: \"kubernetes.io/projected/1d457a27-1fbc-43fd-81b8-3d0b1e495f0e-kube-api-access-4f59h\") pod \"kube-storage-version-migrator-operator-b67b599dd-j8jkh\" (UID: \"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.416160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbn82\" (UniqueName: \"kubernetes.io/projected/6ae9ab97-d0d9-4ba4-a842-fa74943633ba-kube-api-access-mbn82\") pod \"etcd-operator-b45778765-2fq8c\" (UID: \"6ae9ab97-d0d9-4ba4-a842-fa74943633ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.432868 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.435828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs6g7\" (UniqueName: \"kubernetes.io/projected/1c808b47-7d86-4456-ae60-cb83d2a58262-kube-api-access-xs6g7\") pod \"machine-config-operator-74547568cd-nrmwq\" (UID: \"1c808b47-7d86-4456-ae60-cb83d2a58262\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.463153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqs7g\" (UniqueName: \"kubernetes.io/projected/81bdaed4-2088-404f-a937-9a682635b5ab-kube-api-access-kqs7g\") pod \"service-ca-operator-777779d784-bcdkj\" (UID: \"81bdaed4-2088-404f-a937-9a682635b5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.474613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2lx\" (UniqueName: \"kubernetes.io/projected/4f027876-8332-446c-9ea2-29d38ba7fcfc-kube-api-access-pq2lx\") pod \"catalog-operator-68c6474976-r4dq5\" (UID: \"4f027876-8332-446c-9ea2-29d38ba7fcfc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.491223 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.521993 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.524082 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.524991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.542118 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.546441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.547929 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-tmpfs\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575294 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-metrics-certs\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z729h\" (UniqueName: \"kubernetes.io/projected/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-kube-api-access-z729h\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-certs\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575367 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dtg\" (UniqueName: \"kubernetes.io/projected/801796b0-9f86-4104-90bb-1722280f5bfd-kube-api-access-p7dtg\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.575486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/801796b0-9f86-4104-90bb-1722280f5bfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576070 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dfaef85e-a778-46ff-976e-616a2128a811-signing-key\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjp9x\" (UniqueName: \"kubernetes.io/projected/0ed71eac-dbf3-4832-add7-58fd93339c41-kube-api-access-wjp9x\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-stats-auth\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576238 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-webhook-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-default-certificate\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576515 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.576532 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.578970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581298 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdr95\" (UniqueName: \"kubernetes.io/projected/685cffa9-987c-4800-b908-d5a6716e25b4-kube-api-access-jdr95\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581379 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nr5\" (UniqueName: \"kubernetes.io/projected/dfaef85e-a778-46ff-976e-616a2128a811-kube-api-access-p2nr5\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581441 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-service-ca-bundle\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581626 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-node-bootstrap-token\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581645 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 
22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581727 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581773 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-srv-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581790 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrgr\" (UniqueName: \"kubernetes.io/projected/a66e9f9f-4696-49ba-acab-6e131a5efb91-kube-api-access-xvrgr\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.581811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4cbx\" (UniqueName: \"kubernetes.io/projected/74772f4c-89ee-4080-9cb4-90ef4170a726-kube-api-access-v4cbx\") pod \"migrator-59844c95c7-dlt29\" (UID: \"74772f4c-89ee-4080-9cb4-90ef4170a726\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4s7z\" (UniqueName: \"kubernetes.io/projected/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-kube-api-access-m4s7z\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582178 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.582427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4bq9\" (UniqueName: \"kubernetes.io/projected/09d90176-aba2-4498-a4c4-dc240df81c98-kube-api-access-l4bq9\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584110 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41617d0a-24b4-47c9-b970-4cbab31285eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584163 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wv4\" (UniqueName: \"kubernetes.io/projected/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-kube-api-access-n6wv4\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584428 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d90176-aba2-4498-a4c4-dc240df81c98-metrics-tls\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4s7\" (UniqueName: \"kubernetes.io/projected/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-kube-api-access-2f4s7\") pod \"router-default-5444994796-xpkxl\" (UID: 
\"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.584584 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.084564221 +0000 UTC m=+153.073335733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584626 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9m7s\" (UniqueName: \"kubernetes.io/projected/41617d0a-24b4-47c9-b970-4cbab31285eb-kube-api-access-m9m7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584934 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a66e9f9f-4696-49ba-acab-6e131a5efb91-config-volume\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584955 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a66e9f9f-4696-49ba-acab-6e131a5efb91-metrics-tls\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.584999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dfaef85e-a778-46ff-976e-616a2128a811-signing-cabundle\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 
Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.585181 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.585799 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.586367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.586703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-proxy-tls\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.586729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.613964 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.620994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" event={"ID":"49db6ef3-ecbb-44ba-90bd-6b4fad356374","Type":"ContainerStarted","Data":"e9dd6b06687e6f394029beaa70efd6dbb9f84f49af37365addaa8a9a121ad724"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.621386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" event={"ID":"49db6ef3-ecbb-44ba-90bd-6b4fad356374","Type":"ContainerStarted","Data":"39767c6b73ed145b14edede4182e51a7be323599092779f886fca5215ce1dbe4"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.621398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" event={"ID":"49db6ef3-ecbb-44ba-90bd-6b4fad356374","Type":"ContainerStarted","Data":"714265be8142869c8bc0d2923ebd1e4de47646776e33e15513a03576fafc7bd1"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.622551 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.629580 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.641325 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerStarted","Data":"96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.641408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerStarted","Data":"35ef9fd4d9f1a8c21342432b8e94e83c86e086b0ceab6b71120d185cc8b1ccf3"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.641427 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.651746 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r5m9x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.651809 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.671395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" event={"ID":"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a","Type":"ContainerStarted","Data":"e89846f52d475c475c4920686e04aeb7e7fb70427680812e0e1875c0100432ce"}
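The prober.go:107 failure above is expected during startup: the route-controller-manager container has just started, and the kubelet is already dialing https://10.217.0.6:8443/healthz before the server binds its listener, so the dial returns "connection refused" until the first successful probe flips the pod Ready. A rough Go sketch of such an HTTP readiness check; the URL, timeout, and TLS handling are illustrative rather than the kubelet's exact prober:

```go
// Rough sketch of an HTTP readiness probe like the failing one above:
// GET https://<podIP>:8443/healthz and report failure until the
// container's listener is up.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeReadiness returns success only for a 2xx/3xx response; a dial
// error (e.g. "connect: connection refused") is a plain probe failure.
func probeReadiness(url string) (bool, string) {
	client := &http.Client{
		Timeout: time.Second,
		// Probes must tolerate the pod's self-signed serving certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
}

func main() {
	ok, output := probeReadiness("https://10.217.0.6:8443/healthz")
	fmt.Printf("probeResult=%v output=%q\n", ok, output)
}
```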
Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.671450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" event={"ID":"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a","Type":"ContainerStarted","Data":"6f8073104a10054235daa35ee7eab491effbd82f3b32302a022237546d412b0f"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.671460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" event={"ID":"8b2fe8d2-137a-46a7-a57e-94a1c4d91f8a","Type":"ContainerStarted","Data":"493750e07675f90444f80ef2666e3951efa20c54ec7fe48dd7f4e35295754851"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.675321 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.682837 4793 generic.go:334] "Generic (PLEG): container finished" podID="108460a3-822d-405c-a0fa-cdd12ea4123f" containerID="ee9e8a33b4e192ef95cca2fec2e7441dd0a1edfb9cb8f81a28b301e049e2951f" exitCode=0 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.682994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerDied","Data":"ee9e8a33b4e192ef95cca2fec2e7441dd0a1edfb9cb8f81a28b301e049e2951f"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.683035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerStarted","Data":"18aaeeda62e42829161ac2d7ae8e6d20aed4a24d6c48f25d47d1566e341f9a84"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.699936 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.700624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.701017 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.200985779 +0000 UTC m=+153.189757341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.701860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-service-ca-bundle\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.700829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-service-ca-bundle\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.702652 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.702693 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-node-bootstrap-token\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.702709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703301 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-plugins-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"
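Both E-level nestedpendingoperations entries above end with "No retries permitted until ... (durationBeforeRetry 500ms)": the kubelet tracks each failing volume operation and applies exponential backoff, starting at 500ms and doubling on every consecutive failure up to a cap, so the mount and teardown are retried shortly after the CSI driver registers. A hedged Go sketch of that bookkeeping; the constants mirror the upstream defaults as I understand them, but treat them as illustrative:

```go
// Sketch of the per-operation exponential backoff behind "No retries
// permitted until ... (durationBeforeRetry 500ms)". Constants are
// illustrative approximations of the kubelet's defaults.
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

// expBackoff tracks the most recent failure of one volume operation.
type expBackoff struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

// update starts the wait at 500ms and doubles it on each consecutive
// failure, capping at the maximum.
func (b *expBackoff) update(now time.Time) {
	if b.durationBeforeRetry == 0 {
		b.durationBeforeRetry = initialDurationBeforeRetry
	} else {
		b.durationBeforeRetry *= 2
		if b.durationBeforeRetry > maxDurationBeforeRetry {
			b.durationBeforeRetry = maxDurationBeforeRetry
		}
	}
	b.lastErrorTime = now
}

// retryPermitted reports whether the backoff window has elapsed.
func (b *expBackoff) retryPermitted(now time.Time) bool {
	return now.Sub(b.lastErrorTime) >= b.durationBeforeRetry
}

func main() {
	var b expBackoff
	t0 := time.Now()
	b.update(t0) // first TearDown failure -> wait 500ms
	fmt.Println(b.durationBeforeRetry, b.retryPermitted(t0.Add(400*time.Millisecond))) // 500ms false
	b.update(t0.Add(500 * time.Millisecond)) // second consecutive failure -> wait 1s
	fmt.Println(b.durationBeforeRetry)
}
```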
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-srv-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrgr\" (UniqueName: \"kubernetes.io/projected/a66e9f9f-4696-49ba-acab-6e131a5efb91-kube-api-access-xvrgr\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4cbx\" (UniqueName: \"kubernetes.io/projected/74772f4c-89ee-4080-9cb4-90ef4170a726-kube-api-access-v4cbx\") pod \"migrator-59844c95c7-dlt29\" (UID: \"74772f4c-89ee-4080-9cb4-90ef4170a726\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4s7z\" (UniqueName: \"kubernetes.io/projected/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-kube-api-access-m4s7z\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703614 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0965955-3d25-41e9-a3bc-73c19a206418-cert\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703657 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703697 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod 
\"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.703992 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-socket-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704016 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4bq9\" (UniqueName: \"kubernetes.io/projected/09d90176-aba2-4498-a4c4-dc240df81c98-kube-api-access-l4bq9\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704035 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41617d0a-24b4-47c9-b970-4cbab31285eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704063 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6wv4\" (UniqueName: \"kubernetes.io/projected/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-kube-api-access-n6wv4\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704079 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-registration-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d90176-aba2-4498-a4c4-dc240df81c98-metrics-tls\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" 
Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4s7\" (UniqueName: \"kubernetes.io/projected/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-kube-api-access-2f4s7\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9m7s\" (UniqueName: \"kubernetes.io/projected/41617d0a-24b4-47c9-b970-4cbab31285eb-kube-api-access-m9m7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dfaef85e-a778-46ff-976e-616a2128a811-signing-cabundle\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a66e9f9f-4696-49ba-acab-6e131a5efb91-config-volume\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704284 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a66e9f9f-4696-49ba-acab-6e131a5efb91-metrics-tls\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704364 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcxs\" (UniqueName: \"kubernetes.io/projected/e0965955-3d25-41e9-a3bc-73c19a206418-kube-api-access-5kcxs\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-proxy-tls\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rhjg\" (UniqueName: \"kubernetes.io/projected/cdcb0afe-af56-4b15-b712-5443e772c75d-kube-api-access-5rhjg\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704523 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-tmpfs\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704569 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-metrics-certs\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704604 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z729h\" (UniqueName: \"kubernetes.io/projected/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-kube-api-access-z729h\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704622 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-certs\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.704639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dtg\" (UniqueName: \"kubernetes.io/projected/801796b0-9f86-4104-90bb-1722280f5bfd-kube-api-access-p7dtg\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-csi-data-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/801796b0-9f86-4104-90bb-1722280f5bfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dfaef85e-a778-46ff-976e-616a2128a811-signing-key\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjp9x\" (UniqueName: \"kubernetes.io/projected/0ed71eac-dbf3-4832-add7-58fd93339c41-kube-api-access-wjp9x\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-stats-auth\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705724 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-webhook-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-default-certificate\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705778 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdr95\" (UniqueName: \"kubernetes.io/projected/685cffa9-987c-4800-b908-d5a6716e25b4-kube-api-access-jdr95\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.705978 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nr5\" (UniqueName: \"kubernetes.io/projected/dfaef85e-a778-46ff-976e-616a2128a811-kube-api-access-p2nr5\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706427 4793 generic.go:334] "Generic (PLEG): container finished" podID="1b75b5dd-f846-44d1-b751-5d8241200a89" containerID="c95ab1e0db1bcb0a2b24b20f5a530f5f04ffb796ac1f139715840c6e4cf507fa" exitCode=0 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706747 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" event={"ID":"1b75b5dd-f846-44d1-b751-5d8241200a89","Type":"ContainerDied","Data":"c95ab1e0db1bcb0a2b24b20f5a530f5f04ffb796ac1f139715840c6e4cf507fa"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" 
event={"ID":"1b75b5dd-f846-44d1-b751-5d8241200a89","Type":"ContainerStarted","Data":"7cee3695f290c2b3f1586199616cb7e87d39ef0e39a5c1f2a9f0b8913294bf0a"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.706973 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.707743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.710810 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.711092 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.714768 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.714814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-apiservice-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.716566 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a66e9f9f-4696-49ba-acab-6e131a5efb91-config-volume\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-stats-auth\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dfaef85e-a778-46ff-976e-616a2128a811-signing-cabundle\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717665 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-node-bootstrap-token\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.717833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-tmpfs\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.718898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.719089 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.719303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerStarted","Data":"4ef809a2826a84fc4c6df7b47339752d4935d8cadd1765fb919c6712c00c442f"} Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.725173 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.725862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.727349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-metrics-certs\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.727962 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc 
kubenswrapper[4793]: I0126 22:42:17.728413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a66e9f9f-4696-49ba-acab-6e131a5efb91-metrics-tls\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.729578 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.229551987 +0000 UTC m=+153.218323499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.729709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-proxy-tls\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.730442 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dfaef85e-a778-46ff-976e-616a2128a811-signing-key\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.731711 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/801796b0-9f86-4104-90bb-1722280f5bfd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.734222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.734947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d90176-aba2-4498-a4c4-dc240df81c98-metrics-tls\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:17 crc kubenswrapper[4793]: W0126 22:42:17.735070 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb75b68da_88f9_46b9_a8c3_f0fa8cb5e3af.slice/crio-600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5 WatchSource:0}: Error finding container 
600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5: Status 404 returned error can't find the container with id 600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5 Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.736379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.740169 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0ed71eac-dbf3-4832-add7-58fd93339c41-certs\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.744652 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.748026 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-default-certificate\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.751362 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-webhook-cert\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.751706 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.766398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nr5\" (UniqueName: \"kubernetes.io/projected/dfaef85e-a778-46ff-976e-616a2128a811-kube-api-access-p2nr5\") pod \"service-ca-9c57cc56f-pj7sl\" (UID: \"dfaef85e-a778-46ff-976e-616a2128a811\") " pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.766454 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41617d0a-24b4-47c9-b970-4cbab31285eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 
22:42:17.767068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdr95\" (UniqueName: \"kubernetes.io/projected/685cffa9-987c-4800-b908-d5a6716e25b4-kube-api-access-jdr95\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.767288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/685cffa9-987c-4800-b908-d5a6716e25b4-srv-cert\") pod \"olm-operator-6b444d44fb-2x6x8\" (UID: \"685cffa9-987c-4800-b908-d5a6716e25b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.775353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4cbx\" (UniqueName: \"kubernetes.io/projected/74772f4c-89ee-4080-9cb4-90ef4170a726-kube-api-access-v4cbx\") pod \"migrator-59844c95c7-dlt29\" (UID: \"74772f4c-89ee-4080-9cb4-90ef4170a726\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.778872 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.790348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.793683 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.793729 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.794248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrgr\" (UniqueName: \"kubernetes.io/projected/a66e9f9f-4696-49ba-acab-6e131a5efb91-kube-api-access-xvrgr\") pod \"dns-default-jv28b\" (UID: \"a66e9f9f-4696-49ba-acab-6e131a5efb91\") " pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.807321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.807694 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.307638581 +0000 UTC m=+153.296410093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.807904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rhjg\" (UniqueName: \"kubernetes.io/projected/cdcb0afe-af56-4b15-b712-5443e772c75d-kube-api-access-5rhjg\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.807976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-csi-data-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808117 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-plugins-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0965955-3d25-41e9-a3bc-73c19a206418-cert\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808246 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808284 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808307 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-socket-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-registration-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: 
\"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.808550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcxs\" (UniqueName: \"kubernetes.io/projected/e0965955-3d25-41e9-a3bc-73c19a206418-kube-api-access-5kcxs\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.809820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-registration-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.811145 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-csi-data-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.812664 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.813696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-plugins-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.814238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-socket-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.814584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cdcb0afe-af56-4b15-b712-5443e772c75d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.814824 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.314796108 +0000 UTC m=+153.303567620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.832984 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0965955-3d25-41e9-a3bc-73c19a206418-cert\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.838294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9m7s\" (UniqueName: \"kubernetes.io/projected/41617d0a-24b4-47c9-b970-4cbab31285eb-kube-api-access-m9m7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-wbc5d\" (UID: \"41617d0a-24b4-47c9-b970-4cbab31285eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.839018 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"marketplace-operator-79b997595-rzprv\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.864444 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.874103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4s7\" (UniqueName: \"kubernetes.io/projected/6b2e3c07-cf6d-4c97-b19b-dacc24b947d6-kube-api-access-2f4s7\") pod \"router-default-5444994796-xpkxl\" (UID: \"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6\") " pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.877404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"collect-profiles-29491110-fw8kx\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.894390 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.910161 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.910363 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.410332112 +0000 UTC m=+153.399103624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.911582 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: E0126 22:42:17.912444 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.412429265 +0000 UTC m=+153.401200777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.922757 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4s7z\" (UniqueName: \"kubernetes.io/projected/b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff-kube-api-access-m4s7z\") pod \"package-server-manager-789f6589d5-rxzcb\" (UID: \"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.936317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6wv4\" (UniqueName: \"kubernetes.io/projected/33ca38dd-04b2-48d0-8bdd-84b05d96ce92-kube-api-access-n6wv4\") pod \"machine-config-controller-84d6567774-58pvn\" (UID: \"33ca38dd-04b2-48d0-8bdd-84b05d96ce92\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.942133 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2l78j"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.946155 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.979755 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw"] Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.984999 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:17 crc kubenswrapper[4793]: I0126 22:42:17.999509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjp9x\" (UniqueName: \"kubernetes.io/projected/0ed71eac-dbf3-4832-add7-58fd93339c41-kube-api-access-wjp9x\") pod \"machine-config-server-65h9z\" (UID: \"0ed71eac-dbf3-4832-add7-58fd93339c41\") " pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.002035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4bq9\" (UniqueName: \"kubernetes.io/projected/09d90176-aba2-4498-a4c4-dc240df81c98-kube-api-access-l4bq9\") pod \"dns-operator-744455d44c-6kszs\" (UID: \"09d90176-aba2-4498-a4c4-dc240df81c98\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.007758 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.013397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.013680 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.513665102 +0000 UTC m=+153.502436614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.014011 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.016572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z729h\" (UniqueName: \"kubernetes.io/projected/6b5651da-f3c7-41fe-a7b5-c9a054827d3d-kube-api-access-z729h\") pod \"packageserver-d55dfcdfc-cq8pk\" (UID: \"6b5651da-f3c7-41fe-a7b5-c9a054827d3d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.026750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.048330 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.064242 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dtg\" (UniqueName: \"kubernetes.io/projected/801796b0-9f86-4104-90bb-1722280f5bfd-kube-api-access-p7dtg\") pod \"multus-admission-controller-857f4d67dd-vn8zr\" (UID: \"801796b0-9f86-4104-90bb-1722280f5bfd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.065001 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.078453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcxs\" (UniqueName: \"kubernetes.io/projected/e0965955-3d25-41e9-a3bc-73c19a206418-kube-api-access-5kcxs\") pod \"ingress-canary-rz5xt\" (UID: \"e0965955-3d25-41e9-a3bc-73c19a206418\") " pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.081037 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf111a8c3_da3b_48f7_aad2_693c62b93659.slice/crio-69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683 WatchSource:0}: Error finding container 69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683: Status 404 returned error can't find the container with id 69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683 Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.097418 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57535671_8438_44b0_95f3_24679160fb8d.slice/crio-4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015 WatchSource:0}: Error finding container 4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015: Status 404 returned error can't find the container with id 4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015 Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.104254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.105218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.106848 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rhjg\" (UniqueName: \"kubernetes.io/projected/cdcb0afe-af56-4b15-b712-5443e772c75d-kube-api-access-5rhjg\") pod \"csi-hostpathplugin-5kfbz\" (UID: \"cdcb0afe-af56-4b15-b712-5443e772c75d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.123248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.123744 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.623704186 +0000 UTC m=+153.612475698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.127327 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.129727 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.136761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.157696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.172131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-65h9z" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.177727 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rz5xt" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.197991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.224517 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.224869 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.72483086 +0000 UTC m=+153.713602372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.225048 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.225445 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.725434728 +0000 UTC m=+153.714206240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.279552 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g9vhm"] Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.280934 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17125aa_eb99_4bad_a99d_44b86be4f09d.slice/crio-52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c WatchSource:0}: Error finding container 52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c: Status 404 returned error can't find the container with id 52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.281897 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2fq8c"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.301148 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gxxfj"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.322526 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.322568 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.325807 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.325924 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.825903612 +0000 UTC m=+153.814675124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.326328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.327373 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.827350856 +0000 UTC m=+153.816122388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.430168 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.430863 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:18.93084099 +0000 UTC m=+153.919612502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.435840 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36b0f3df_e65a_41d3_b718_916bd868f437.slice/crio-1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3 WatchSource:0}: Error finding container 1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3: Status 404 returned error can't find the container with id 1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3 Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.455947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.458568 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.519757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.531570 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.533899 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.534468 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.034444088 +0000 UTC m=+154.023215600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.538847 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.540937 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhc46" podStartSLOduration=126.540922075 podStartE2EDuration="2m6.540922075s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:18.511906593 +0000 UTC m=+153.500678105" watchObservedRunningTime="2026-01-26 22:42:18.540922075 +0000 UTC m=+153.529693587" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.593759 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podStartSLOduration=126.59370658899999 podStartE2EDuration="2m6.593706589s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:18.569765162 +0000 UTC m=+153.558536674" watchObservedRunningTime="2026-01-26 22:42:18.593706589 +0000 UTC m=+153.582478101" Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.597913 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.602066 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pj7sl"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.623032 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq"] Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.634880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.635655 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.135629734 +0000 UTC m=+154.124401246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.635807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.636423 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.136391587 +0000 UTC m=+154.125163089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.739007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.739672 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.239647115 +0000 UTC m=+154.228418627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.788471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-65h9z" event={"ID":"0ed71eac-dbf3-4832-add7-58fd93339c41","Type":"ContainerStarted","Data":"e24638c37778163b679e0bdfbdfc3278e0eed094ccce1232b194269a769f6999"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.823975 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podStartSLOduration=126.823952457 podStartE2EDuration="2m6.823952457s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:18.823758851 +0000 UTC m=+153.812530363" watchObservedRunningTime="2026-01-26 22:42:18.823952457 +0000 UTC m=+153.812723959"
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.825009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerStarted","Data":"536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.826626 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.826696 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.830503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" event={"ID":"1b75b5dd-f846-44d1-b751-5d8241200a89","Type":"ContainerStarted","Data":"4681f81b6f33ccab284c7ff4b9370dd801615735b68d9e0f3874526865782d3a"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.832007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" event={"ID":"ea3e8958-c4f8-41bc-b5ca-6be701416ea7","Type":"ContainerStarted","Data":"c4ba84741222ce1cb6218d2904fc86045acbac92cff11fc0bce82595e39aa018"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.834868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" event={"ID":"f6f869ac-7928-4749-9ba7-04ec01b48bc0","Type":"ContainerStarted","Data":"1cd8a394c15ddb672f635ba1fce8177e9bcf93b89ea00e9148005262ef5cd3ec"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.839977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.840944 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerStarted","Data":"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.840971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerStarted","Data":"fc7e45cc09b7e136cabcbaac23e466557070638bb704168bde4eed70e8068d9f"}
Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.841420 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.341407648 +0000 UTC m=+154.330179160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.846974 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" event={"ID":"36b0f3df-e65a-41d3-b718-916bd868f437","Type":"ContainerStarted","Data":"1eec77a211d972f5524037859ce5fa9b4a5ea78fb3e0c8e9c5d44264a423d2d3"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.862804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerStarted","Data":"52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.867835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" event={"ID":"a7012682-3c72-4541-ae5b-5c1522508f39","Type":"ContainerStarted","Data":"e05a8000f8913cd20aa6207c0cc680fa4913ef20af8843ed84f893383ad64e8d"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.867871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" event={"ID":"a7012682-3c72-4541-ae5b-5c1522508f39","Type":"ContainerStarted","Data":"f8ba1ce666061f394119cd3ca7ef436f6b73f9e93cc1b1446bb221e11e9a0f62"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.904018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" event={"ID":"6ae9ab97-d0d9-4ba4-a842-fa74943633ba","Type":"ContainerStarted","Data":"b778976adac39956f1fcc61d6051ee163ea68f2f24e511f41050506715496d6e"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.919281 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" event={"ID":"4f027876-8332-446c-9ea2-29d38ba7fcfc","Type":"ContainerStarted","Data":"679134a08020f50eeb319b148f1fe1687fda64c50ef9bfa6369611cdec7e0d8a"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.944899 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.945087 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.445058818 +0000 UTC m=+154.433830330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.945994 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:18 crc kubenswrapper[4793]: E0126 22:42:18.946605 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.446597354 +0000 UTC m=+154.435368866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.958997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" event={"ID":"81bdaed4-2088-404f-a937-9a682635b5ab","Type":"ContainerStarted","Data":"4c287b0751d0f15110c2d23d47e37e261c02fff7d2870afeed6dd4327bc02d9c"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.962381 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" event={"ID":"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d","Type":"ContainerStarted","Data":"84a12e8194ec14c3637645ecc9335c8343191862b3d5fb8c20784e16155623ec"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.964787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" event={"ID":"2f225855-c898-4519-96fb-c0556fb46513","Type":"ContainerStarted","Data":"357aefffba60b96fbe621f24bee5a12cce70876bcc4866051f6a70df07eaa74d"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.975523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2l78j" event={"ID":"57535671-8438-44b0-95f3-24679160fb8d","Type":"ContainerStarted","Data":"4fa17532c9413ffc9078f4e9d27fc930c3113b0f8205661f9431fbcda9fdb015"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.976271 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2l78j"
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.977067 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.977118 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 26 22:42:18 crc kubenswrapper[4793]: W0126 22:42:18.982954 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685cffa9_987c_4800_b908_d5a6716e25b4.slice/crio-e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5 WatchSource:0}: Error finding container e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5: Status 404 returned error can't find the container with id e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.987020 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerStarted","Data":"9c173340ca8ab3e87c8f7274f668f8cd11fd21616f5273abc5154e4e68c1710c"}
Jan 26 22:42:18 crc kubenswrapper[4793]: I0126 22:42:18.994035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" event={"ID":"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e","Type":"ContainerStarted","Data":"a11e07e424226e6cab2d8951e32fbb129b6f1fdac3d1833d57da18499dc8cfb4"}
Jan 26 22:42:19 crc kubenswrapper[4793]: W0126 22:42:19.011896 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c808b47_7d86_4456_ae60_cb83d2a58262.slice/crio-4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485 WatchSource:0}: Error finding container 4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485: Status 404 returned error can't find the container with id 4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.014309 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" event={"ID":"f111a8c3-da3b-48f7-aad2-693c62b93659","Type":"ContainerStarted","Data":"69415f202dd286597dad1256eb90d900f2df7eeec809c0056895bde2a1883683"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.029370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xpkxl" event={"ID":"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6","Type":"ContainerStarted","Data":"112c222b8cb653648bcb4693b6aa55f055488e71c0c9795404beae8f592dd5d8"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.039800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" event={"ID":"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec","Type":"ContainerStarted","Data":"3924b8750b2e795e17b40b9fb7874af95661a4a181beed310c4b6d32b998c205"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.039850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" event={"ID":"3d20f3c1-8995-4eb3-9c83-3219e7ad35ec","Type":"ContainerStarted","Data":"914d04d56feb7a9c5ead7fb33b39b2c046c63eb407160d2af49d6084ec87388a"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.041105 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jv28b"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.044987 4793 generic.go:334] "Generic (PLEG): container finished" podID="b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af" containerID="f8c13020df21bae69d5cfc42233d948fb82e3b070172af0a5386752ffd58c8df" exitCode=0
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.045428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" event={"ID":"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af","Type":"ContainerDied","Data":"f8c13020df21bae69d5cfc42233d948fb82e3b070172af0a5386752ffd58c8df"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.045505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" event={"ID":"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af","Type":"ContainerStarted","Data":"600e0cefd07e26e1d24d321a5c9d90f512f22d28425ade76544087bb950967f5"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.049712 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.052365 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.552346388 +0000 UTC m=+154.541117900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.052555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" event={"ID":"614c35d6-ed0a-4b6d-9241-6df532fa9528","Type":"ContainerStarted","Data":"ba8f3925b54ba7ffb434a8e44462ed039b5a860c5a16d87b7ec57cf9c726e9f3"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.053934 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" event={"ID":"614c35d6-ed0a-4b6d-9241-6df532fa9528","Type":"ContainerStarted","Data":"ae8900e7b166d622b4268c24095477d96c34b6314803c9e1cf2a2165ba8556f6"}
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.073765 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.103044 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.154571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.160318 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.660269958 +0000 UTC m=+154.649041470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.190353 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.195616 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kszs"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.255092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.255390 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.755367978 +0000 UTC m=+154.744139490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.310411 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.316232 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.318067 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vn8zr"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.341165 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfbz"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.349941 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.356650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.357122 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.85709947 +0000 UTC m=+154.845870982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.438421 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rz5xt"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.440254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.451987 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk"]
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.459396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.459746 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:19.95973162 +0000 UTC m=+154.948503132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.561409 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.561854 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.061831122 +0000 UTC m=+155.050602635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.662618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.662790 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.16276263 +0000 UTC m=+155.151534142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.662891 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.663302 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.163294236 +0000 UTC m=+155.152065748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.764282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.764495 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.26445052 +0000 UTC m=+155.253222032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.764662 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.765055 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.265023128 +0000 UTC m=+155.253794650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.865413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.865615 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.365586224 +0000 UTC m=+155.354357736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.866004 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.866401 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.366390119 +0000 UTC m=+155.355161631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.886970 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h84t5" podStartSLOduration=127.886955734 podStartE2EDuration="2m7.886955734s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:19.886007195 +0000 UTC m=+154.874778717" watchObservedRunningTime="2026-01-26 22:42:19.886955734 +0000 UTC m=+154.875727246"
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.908979 4793 csr.go:261] certificate signing request csr-zv74z is approved, waiting to be issued
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.925787 4793 csr.go:257] certificate signing request csr-zv74z is issued
Jan 26 22:42:19 crc kubenswrapper[4793]: I0126 22:42:19.966627 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:19 crc kubenswrapper[4793]: E0126 22:42:19.966878 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.466864892 +0000 UTC m=+155.455636394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.000344 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" podStartSLOduration=128.000323719 podStartE2EDuration="2m8.000323719s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:19.998491543 +0000 UTC m=+154.987263065" watchObservedRunningTime="2026-01-26 22:42:20.000323719 +0000 UTC m=+154.989095231"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.058461 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-99f2s" podStartSLOduration=128.058444375 podStartE2EDuration="2m8.058444375s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.056319601 +0000 UTC m=+155.045091113" watchObservedRunningTime="2026-01-26 22:42:20.058444375 +0000 UTC m=+155.047215887"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.058994 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-57ccj" podStartSLOduration=128.058988012 podStartE2EDuration="2m8.058988012s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.030847657 +0000 UTC m=+155.019619169" watchObservedRunningTime="2026-01-26 22:42:20.058988012 +0000 UTC m=+155.047759524"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.069821 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.070694 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.570680567 +0000 UTC m=+155.559452079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.084630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" event={"ID":"f111a8c3-da3b-48f7-aad2-693c62b93659","Type":"ContainerStarted","Data":"1dd1f24f90a7871125b38db0808a6ec03e999cfdc8e6c92a2abf285b05d72742"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.106429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" event={"ID":"dfaef85e-a778-46ff-976e-616a2128a811","Type":"ContainerStarted","Data":"2bb368465f2dda10e5411dffd6581c5e9f6aa711f84c58e5ece12f848645e16a"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.106496 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" event={"ID":"dfaef85e-a778-46ff-976e-616a2128a811","Type":"ContainerStarted","Data":"d8e4799f6fe3f347fc39ed8ded3eaa463b2f88336348453e3deb237201bef022"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.119124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" event={"ID":"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d","Type":"ContainerStarted","Data":"519194ab2b1b2d6ee349e477986d2db39f0a990a30d601aac2530787a39d4528"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.121714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" event={"ID":"41617d0a-24b4-47c9-b970-4cbab31285eb","Type":"ContainerStarted","Data":"09d0eac78f32807cfcc281aa405a2ac286813a6d91d2120754532d6910b0b1d1"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.125738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerStarted","Data":"fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.125796 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.133379 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lvnpc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.133449 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.171460 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.172439 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.672410709 +0000 UTC m=+155.661182221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.183586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"728afac09b64a24556094731b0bc6273dc13c7ef835884946b96b913198b3f8e"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.187564 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2l78j" podStartSLOduration=128.187543289 podStartE2EDuration="2m8.187543289s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.185070914 +0000 UTC m=+155.173842436" watchObservedRunningTime="2026-01-26 22:42:20.187543289 +0000 UTC m=+155.176314801"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.187862 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8bfn" podStartSLOduration=128.187857338 podStartE2EDuration="2m8.187857338s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.142307194 +0000 UTC m=+155.131078706" watchObservedRunningTime="2026-01-26 22:42:20.187857338 +0000 UTC m=+155.176628850"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.216039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2l78j" event={"ID":"57535671-8438-44b0-95f3-24679160fb8d","Type":"ContainerStarted","Data":"c5771c6c061f963913b73a7dc637cfade05842093c91eba1f5e6b9f2a68298f5"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.216981 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.217016 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.220958 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-msvjw" podStartSLOduration=128.220934314 podStartE2EDuration="2m8.220934314s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.215451197 +0000 UTC m=+155.204222709" watchObservedRunningTime="2026-01-26 22:42:20.220934314 +0000 UTC m=+155.209705826"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.240162 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" event={"ID":"ea3e8958-c4f8-41bc-b5ca-6be701416ea7","Type":"ContainerStarted","Data":"4884e01494e13e16506fd3435514f3207e0f7b8263f94afc51565a1f3d87f275"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.244244 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" event={"ID":"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c","Type":"ContainerStarted","Data":"57f0beeea5e3ec1513ec4de6915e931c6ec5ae086fb38fe05c2d20a971915d31"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.244265 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" event={"ID":"cb818ec8-cfe5-4b8b-9b25-9d10ed05cb6c","Type":"ContainerStarted","Data":"9102b23668ac789ae3ef4986aab063c6c62002b9ff73c4f66dbf881941955b3c"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.273706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.274155 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.774138341 +0000 UTC m=+155.762909853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.281917 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" event={"ID":"108460a3-822d-405c-a0fa-cdd12ea4123f","Type":"ContainerStarted","Data":"28cfe07dd43a3ebb532b016f17952bf0c5719887ac6a2caec5afc790fff10381"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.298033 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podStartSLOduration=128.298008976 podStartE2EDuration="2m8.298008976s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.26195294 +0000 UTC m=+155.250724462" watchObservedRunningTime="2026-01-26 22:42:20.298008976 +0000 UTC m=+155.286780498"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.308592 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xpkxl" event={"ID":"6b2e3c07-cf6d-4c97-b19b-dacc24b947d6","Type":"ContainerStarted","Data":"43b8679abe3ec73a9e35eb062d7545ae5d61f7f2c37f26adf9c7a394bc5375fc"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.319974 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ml4kf" podStartSLOduration=128.319956123 podStartE2EDuration="2m8.319956123s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.300350717 +0000 UTC m=+155.289122229" watchObservedRunningTime="2026-01-26 22:42:20.319956123 +0000 UTC m=+155.308727645"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.335977 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mb5p8" podStartSLOduration=128.335953659 podStartE2EDuration="2m8.335953659s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.329912426 +0000 UTC m=+155.318683938" watchObservedRunningTime="2026-01-26 22:42:20.335953659 +0000 UTC m=+155.324725191"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.338080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" event={"ID":"74772f4c-89ee-4080-9cb4-90ef4170a726","Type":"ContainerStarted","Data":"4fe5fac61d8bcbaad8682bd09bfd4ba1ef2dd38e8e4420b580f4e94df196e726"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.348594 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" event={"ID":"f6f869ac-7928-4749-9ba7-04ec01b48bc0","Type":"ContainerStarted","Data":"56df71948f04df090d995ba09e86f6fa531a10e0a9572412a5b855b49cd6ba7b"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.349742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-g9vhm"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.356326 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-g9vhm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.356383 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" podUID="f6f869ac-7928-4749-9ba7-04ec01b48bc0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.375021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.375595 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.875567873 +0000 UTC m=+155.864339375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.376790 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xpkxl" podStartSLOduration=128.3767657 podStartE2EDuration="2m8.3767657s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.374856362 +0000 UTC m=+155.363627874" watchObservedRunningTime="2026-01-26 22:42:20.3767657 +0000 UTC m=+155.365537212"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.379548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerStarted","Data":"7bb09463affe65b8c8341ff8758a1535e4d3942375b31382f2863df221ac542b"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.389852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" event={"ID":"33ca38dd-04b2-48d0-8bdd-84b05d96ce92","Type":"ContainerStarted","Data":"7fa4766f4e560eff9395dce8f1434196a478bb1db88308694f733caf8cce5825"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.394816 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" event={"ID":"4f027876-8332-446c-9ea2-29d38ba7fcfc","Type":"ContainerStarted","Data":"c78299e65e90b02694396d058c02447fefca5b08a61bbcd41705090715346153"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.395389 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.396648 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4dq5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.396706 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" podUID="4f027876-8332-446c-9ea2-29d38ba7fcfc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.404057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" event={"ID":"1c808b47-7d86-4456-ae60-cb83d2a58262","Type":"ContainerStarted","Data":"4afc3787d038b7f211e9f1794499bc7f0221a55d43785fac9fcc416505f09c3d"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.404100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" event={"ID":"1c808b47-7d86-4456-ae60-cb83d2a58262","Type":"ContainerStarted","Data":"4ae0d866478e3ef22a55a211528abf41b5bf2ee1674884112cc864b85c65d485"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.407986 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rz5xt" event={"ID":"e0965955-3d25-41e9-a3bc-73c19a206418","Type":"ContainerStarted","Data":"88a9796495583d25b8e5202eee2269e353e6d66e13a0d8b1dbad34d9e341cf0c"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.411309 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerStarted","Data":"f22596edcefced17885ef893f7cee49907f5d9cebcaf8c8793ed7795985ac763"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.417818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" event={"ID":"6b5651da-f3c7-41fe-a7b5-c9a054827d3d","Type":"ContainerStarted","Data":"55eda6eff93920d03c6df4418723ce4588f89ac26d518b5f05d86a604f901b24"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.431389 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" podStartSLOduration=128.431359459 podStartE2EDuration="2m8.431359459s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.430514663 +0000 UTC m=+155.419286175" watchObservedRunningTime="2026-01-26 22:42:20.431359459 +0000 UTC m=+155.420130971"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.451596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" event={"ID":"801796b0-9f86-4104-90bb-1722280f5bfd","Type":"ContainerStarted","Data":"6464863e6fa11f4ee0ad9cc3266a4177a63cbc37916e8c319133579f29ccaf25"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.467923 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" podStartSLOduration=128.467900989 podStartE2EDuration="2m8.467900989s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.464366302 +0000 UTC m=+155.453137814" watchObservedRunningTime="2026-01-26 22:42:20.467900989 +0000 UTC m=+155.456672501"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.477526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.480745 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:20.980731009 +0000 UTC m=+155.969502521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.494754 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" podStartSLOduration=128.494729675 podStartE2EDuration="2m8.494729675s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.49458184 +0000 UTC m=+155.483353352" watchObservedRunningTime="2026-01-26 22:42:20.494729675 +0000 UTC m=+155.483501187"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.508258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" event={"ID":"b75b68da-88f9-46b9-a8c3-f0fa8cb5e3af","Type":"ContainerStarted","Data":"a0eff1dbbeb513457356d440c74f85eac8f3d7002b6df959f9b56edd914966fe"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.508562 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.512700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" event={"ID":"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff","Type":"ContainerStarted","Data":"e22d95b1378ed93b62d43bd40d54a953419eccba50bffab7aee56f9d82b494ce"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.521106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" event={"ID":"36b0f3df-e65a-41d3-b718-916bd868f437","Type":"ContainerStarted","Data":"06b092ac2aa030f2688f72aec8bbe2d5905e42f75feda5dbced981dfc4885ce9"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.527557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" event={"ID":"614c35d6-ed0a-4b6d-9241-6df532fa9528","Type":"ContainerStarted","Data":"9cd4ef78972a667059c90bb468eb0a0fb8e5fc18225cff19e7b7f03070ecb96e"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.531947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" event={"ID":"2f225855-c898-4519-96fb-c0556fb46513","Type":"ContainerStarted","Data":"2711ba92af789bbdf6da58ab3dbf6431628bef7878559a17a7461ad1b0472787"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.532847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" event={"ID":"81bdaed4-2088-404f-a937-9a682635b5ab","Type":"ContainerStarted","Data":"2f0686c483eef76044cf59a0a38731780840bc9e0cdf353692e5b681dc3a63a7"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.533750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv28b" event={"ID":"a66e9f9f-4696-49ba-acab-6e131a5efb91","Type":"ContainerStarted","Data":"b96a8b619b88ff3b2d45d955f1320c704ca8047e2077990c463545927b3ecd57"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.534498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" event={"ID":"09d90176-aba2-4498-a4c4-dc240df81c98","Type":"ContainerStarted","Data":"afaf4d3ad878b43967e67d389c33620a46fa5185b00f228e1d88d8d33e509f7f"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.542467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" event={"ID":"685cffa9-987c-4800-b908-d5a6716e25b4","Type":"ContainerStarted","Data":"e97e7bf1b91c0a433c1f12c8271871780da24bc3beda87bc27b40f2d8f236632"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.542535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" event={"ID":"685cffa9-987c-4800-b908-d5a6716e25b4","Type":"ContainerStarted","Data":"e64f2b3697432f8b979ac5ec40f097022652d964e068a23052ddd67e47ab10d5"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.544105 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" podStartSLOduration=128.544088645 podStartE2EDuration="2m8.544088645s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.541154096 +0000 UTC m=+155.529925608" watchObservedRunningTime="2026-01-26 22:42:20.544088645 +0000 UTC m=+155.532860157"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.544232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.545643 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2x6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.546123 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" podUID="685cffa9-987c-4800-b908-d5a6716e25b4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.546438 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" event={"ID":"1d457a27-1fbc-43fd-81b8-3d0b1e495f0e","Type":"ContainerStarted","Data":"fc3e48629f984ecf6485f8835f3823d2582d35f6c8dadefc7306d998227d1e2d"}
Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.551700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-65h9z" event={"ID":"0ed71eac-dbf3-4832-add7-58fd93339c41","Type":"ContainerStarted","Data":"1de510e62a9ceea378f43335e071a9e3a95f69916d181b873578be79ff8007c0"}
Jan 26 22:42:20 crc
kubenswrapper[4793]: I0126 22:42:20.556872 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.556917 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.579620 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.580720 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.080700598 +0000 UTC m=+156.069472110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.586465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" podStartSLOduration=128.586416491 podStartE2EDuration="2m8.586416491s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.58375528 +0000 UTC m=+155.572526802" watchObservedRunningTime="2026-01-26 22:42:20.586416491 +0000 UTC m=+155.575188003" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.674851 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" podStartSLOduration=128.674832318 podStartE2EDuration="2m8.674832318s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.615567417 +0000 UTC m=+155.604338929" watchObservedRunningTime="2026-01-26 22:42:20.674832318 +0000 UTC m=+155.663603830" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.676491 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qmgss" podStartSLOduration=128.676481909 podStartE2EDuration="2m8.676481909s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.674818248 +0000 UTC m=+155.663589760" watchObservedRunningTime="2026-01-26 22:42:20.676481909 +0000 UTC m=+155.665253421" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.683056 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.692931 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.192895777 +0000 UTC m=+156.181667289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.704570 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gxxfj" podStartSLOduration=128.704549792 podStartE2EDuration="2m8.704549792s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.70152794 +0000 UTC m=+155.690299452" watchObservedRunningTime="2026-01-26 22:42:20.704549792 +0000 UTC m=+155.693321304" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.733931 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bscpb" podStartSLOduration=128.733911034 podStartE2EDuration="2m8.733911034s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.730783859 +0000 UTC m=+155.719555361" watchObservedRunningTime="2026-01-26 22:42:20.733911034 +0000 UTC m=+155.722682546" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.770626 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bcdkj" podStartSLOduration=128.770607219 podStartE2EDuration="2m8.770607219s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.770049372 +0000 UTC m=+155.758820884" watchObservedRunningTime="2026-01-26 22:42:20.770607219 +0000 UTC m=+155.759378731" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.784020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.784461 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.284417329 +0000 UTC m=+156.273188841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.823098 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-65h9z" podStartSLOduration=6.823080064 podStartE2EDuration="6.823080064s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.822424564 +0000 UTC m=+155.811196066" watchObservedRunningTime="2026-01-26 22:42:20.823080064 +0000 UTC m=+155.811851576" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.885782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:20 crc kubenswrapper[4793]: E0126 22:42:20.886117 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.386101359 +0000 UTC m=+156.374872871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.929511 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 22:37:19 +0000 UTC, rotation deadline is 2026-12-19 02:28:12.970852691 +0000 UTC Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.929574 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7827h45m52.041280301s for next certificate rotation Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.953074 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j8jkh" podStartSLOduration=128.953050544 podStartE2EDuration="2m8.953050544s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.87921269 +0000 UTC m=+155.867984202" watchObservedRunningTime="2026-01-26 22:42:20.953050544 +0000 UTC m=+155.941822056" Jan 26 22:42:20 crc kubenswrapper[4793]: I0126 22:42:20.958515 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" podStartSLOduration=128.958490639 podStartE2EDuration="2m8.958490639s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:20.95027189 +0000 UTC m=+155.939043402" watchObservedRunningTime="2026-01-26 22:42:20.958490639 +0000 UTC m=+155.947262161" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.000683 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.001989 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.501925509 +0000 UTC m=+156.490697021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.028150 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xpkxl" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.042612 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.042673 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.102023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.102418 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.602402453 +0000 UTC m=+156.591173955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.202918 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.203285 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.703263568 +0000 UTC m=+156.692035080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.304127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.304675 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.80465523 +0000 UTC m=+156.793426742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.405641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.405955 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:21.905939368 +0000 UTC m=+156.894710880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.506995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.507338 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.007326529 +0000 UTC m=+156.996098031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.561975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" event={"ID":"426a3518-6fb9-4c1a-ab27-5a6c6222cd2d","Type":"ContainerStarted","Data":"3bd4e60ea831be7abd97ac89ea034c13f105afd80746f88f9f49b63d88e5999a"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.563862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" event={"ID":"6b5651da-f3c7-41fe-a7b5-c9a054827d3d","Type":"ContainerStarted","Data":"bc8bb0ed30eb61eee3e45b048ba9ec50fe5d2e13dec7f95c26677cae2edd6de2"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.564055 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.565627 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cq8pk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.565733 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" podUID="6b5651da-f3c7-41fe-a7b5-c9a054827d3d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.566968 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" event={"ID":"33ca38dd-04b2-48d0-8bdd-84b05d96ce92","Type":"ContainerStarted","Data":"6877adb4fff2b2005c12cea00d006616ca607c0fdf5d2cafde12dc0592e0a74a"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.567059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" event={"ID":"33ca38dd-04b2-48d0-8bdd-84b05d96ce92","Type":"ContainerStarted","Data":"e655827dc2c63c02fba1cc49c1cadd82c927c20af99d9ff83efb3b8463998f08"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.574386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" event={"ID":"09d90176-aba2-4498-a4c4-dc240df81c98","Type":"ContainerStarted","Data":"8e5ad5f9d5e3843e851452467a4800702fc19c71899d8a1fc5b382544172bc01"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.574412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" event={"ID":"09d90176-aba2-4498-a4c4-dc240df81c98","Type":"ContainerStarted","Data":"28641c7ca3fafc914ce53eb4655beb6491cd6920276685048b21368be95f372c"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.577675 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" event={"ID":"74772f4c-89ee-4080-9cb4-90ef4170a726","Type":"ContainerStarted","Data":"d7beae1c5747060aeaf943d138229b6f01f655f7f86be4b0a08f193d3171e3bb"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.577713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" event={"ID":"74772f4c-89ee-4080-9cb4-90ef4170a726","Type":"ContainerStarted","Data":"0364f5dd1eafec3d4ee871f892045c391e7c880ee6d381f83ac1eb8fc808a64e"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.585111 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" event={"ID":"801796b0-9f86-4104-90bb-1722280f5bfd","Type":"ContainerStarted","Data":"5fbc6c1ccd75631bac630ab8168d1c1626fc39d28efb94cc31bf7c3d36367ced"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.585149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" event={"ID":"801796b0-9f86-4104-90bb-1722280f5bfd","Type":"ContainerStarted","Data":"62cbe197a07b1c376d461a4867a8ee98a05e891460e13a940e94cbb2b0b175a3"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.589258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" event={"ID":"41617d0a-24b4-47c9-b970-4cbab31285eb","Type":"ContainerStarted","Data":"f3b612e32782d4ebb580b253a030066305d33224313a8b405c20be51e735b1dd"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.591037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2fq8c" event={"ID":"6ae9ab97-d0d9-4ba4-a842-fa74943633ba","Type":"ContainerStarted","Data":"8390891d65c9980d57ef61ca061825430898e9b9c2d828c085528cc1a3d392ac"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.592854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" 
event={"ID":"1c808b47-7d86-4456-ae60-cb83d2a58262","Type":"ContainerStarted","Data":"6810cc196bd85fa52b4601661a78bd12a58347761fe24a47372bf619f9e11108"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.594320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerStarted","Data":"0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.594963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.596379 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rzprv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.596428 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.598118 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv28b" event={"ID":"a66e9f9f-4696-49ba-acab-6e131a5efb91","Type":"ContainerStarted","Data":"12eaec58684506bfdd5708acbdc52c55e0218650ecf9f7b1552ac1290fed9bbc"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.598231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jv28b" event={"ID":"a66e9f9f-4696-49ba-acab-6e131a5efb91","Type":"ContainerStarted","Data":"bbabf566bf217a3d17afe75f42fb0faf48402c454476aa0454b3d3285add9bf7"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.598347 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jv28b" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.601107 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" event={"ID":"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff","Type":"ContainerStarted","Data":"bef202b41bf024129969f25ad174b141d18380e33c8d445fca173d1b2e867e9a"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.601232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" event={"ID":"b1fdd043-ab03-4c7e-9a44-6b50b7fb97ff","Type":"ContainerStarted","Data":"7a1e3ad2fa950130846cfb51fec53ed3c7c45f6e8da4c768b678ab2e3e85764c"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.601422 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.602473 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rz5xt" event={"ID":"e0965955-3d25-41e9-a3bc-73c19a206418","Type":"ContainerStarted","Data":"f4226376128ca3b6da6cd286fae5c0b51c008aca2460a5c55addedd023dd631d"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.604695 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerStarted","Data":"cc61868d059ab224be57eb0623ed03f00405cf2898a129989b50ff353c392019"} Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.605411 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606139 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.605570 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-g9vhm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.605893 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lvnpc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606372 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606331 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g9vhm" podUID="f6f869ac-7928-4749-9ba7-04ec01b48bc0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606082 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2x6x8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606437 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" podUID="685cffa9-987c-4800-b908-d5a6716e25b4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606582 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4dq5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" 
start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.606666 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5" podUID="4f027876-8332-446c-9ea2-29d38ba7fcfc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.607300 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5mjq6" podStartSLOduration=129.607285257 podStartE2EDuration="2m9.607285257s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.60572337 +0000 UTC m=+156.594494892" watchObservedRunningTime="2026-01-26 22:42:21.607285257 +0000 UTC m=+156.596056769" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.607487 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.607758 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.107742631 +0000 UTC m=+157.096514143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.633380 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wbc5d" podStartSLOduration=129.63335892999999 podStartE2EDuration="2m9.63335893s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.633152953 +0000 UTC m=+156.621924465" watchObservedRunningTime="2026-01-26 22:42:21.63335893 +0000 UTC m=+156.622130442" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.702863 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb" podStartSLOduration=129.702845981 podStartE2EDuration="2m9.702845981s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.676369497 +0000 UTC m=+156.665141009" watchObservedRunningTime="2026-01-26 22:42:21.702845981 +0000 UTC m=+156.691617493" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.704861 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-58pvn" podStartSLOduration=129.704853302 podStartE2EDuration="2m9.704853302s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.700765628 +0000 UTC m=+156.689537140" watchObservedRunningTime="2026-01-26 22:42:21.704853302 +0000 UTC m=+156.693624814" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.710610 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.712798 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.212783104 +0000 UTC m=+157.201554616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.713998 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.714041 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.717070 4793 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nd4pl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.717127 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl" podUID="108460a3-822d-405c-a0fa-cdd12ea4123f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.744283 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.744612 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.752431 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vn8zr" podStartSLOduration=129.752419668 podStartE2EDuration="2m9.752419668s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.750054886 +0000 UTC m=+156.738826398" watchObservedRunningTime="2026-01-26 22:42:21.752419668 +0000 UTC m=+156.741191180" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.812774 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.813045 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.31302793 +0000 UTC m=+157.301799442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.841414 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" podStartSLOduration=129.841392392 podStartE2EDuration="2m9.841392392s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.806726449 +0000 UTC m=+156.795497971" watchObservedRunningTime="2026-01-26 22:42:21.841392392 +0000 UTC m=+156.830163904" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.842310 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jv28b" podStartSLOduration=7.84230556 podStartE2EDuration="7.84230556s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.84065584 +0000 UTC m=+156.829427352" watchObservedRunningTime="2026-01-26 22:42:21.84230556 +0000 UTC m=+156.831077072" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.880355 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podStartSLOduration=129.880337096 podStartE2EDuration="2m9.880337096s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.879545552 +0000 UTC m=+156.868317054" watchObservedRunningTime="2026-01-26 22:42:21.880337096 +0000 UTC m=+156.869108608" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.911965 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rz5xt" podStartSLOduration=7.911942706 podStartE2EDuration="7.911942706s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.910306627 +0000 UTC m=+156.899078139" watchObservedRunningTime="2026-01-26 22:42:21.911942706 +0000 UTC m=+156.900714218" Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.915024 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:21 crc kubenswrapper[4793]: E0126 22:42:21.915990 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 22:42:22.415969849 +0000 UTC m=+157.404741361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:21 crc kubenswrapper[4793]: I0126 22:42:21.993634 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrmwq" podStartSLOduration=129.993615577 podStartE2EDuration="2m9.993615577s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:21.95357089 +0000 UTC m=+156.942342412" watchObservedRunningTime="2026-01-26 22:42:21.993615577 +0000 UTC m=+156.982387089" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.017734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.018095 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.518077201 +0000 UTC m=+157.506848713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.039371 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:22 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:22 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:22 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.039443 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.072821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6kszs" podStartSLOduration=130.072795754 podStartE2EDuration="2m10.072795754s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:22.00257681 +0000 UTC m=+156.991348322" watchObservedRunningTime="2026-01-26 22:42:22.072795754 +0000 UTC m=+157.061567266" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.120597 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.120979 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.620966748 +0000 UTC m=+157.609738260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.123477 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pj7sl" podStartSLOduration=130.123453173 podStartE2EDuration="2m10.123453173s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:22.121328709 +0000 UTC m=+157.110100221" watchObservedRunningTime="2026-01-26 22:42:22.123453173 +0000 UTC m=+157.112224685" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.123860 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlt29" podStartSLOduration=130.123854986 podStartE2EDuration="2m10.123854986s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:22.076536798 +0000 UTC m=+157.065308310" watchObservedRunningTime="2026-01-26 22:42:22.123854986 +0000 UTC m=+157.112626498" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.191235 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.222238 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.222554 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.722511604 +0000 UTC m=+157.711283116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.222834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.223314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.723307058 +0000 UTC m=+157.712078570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.323824 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.324040 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.824005699 +0000 UTC m=+157.812777211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.324182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.324566 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.824547905 +0000 UTC m=+157.813319417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.425951 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.426181 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.926142563 +0000 UTC m=+157.914914075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.426414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.426826 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:22.926793962 +0000 UTC m=+157.915565474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.528214 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.528357 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.028325498 +0000 UTC m=+158.017097000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.528390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.528733 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.02872596 +0000 UTC m=+158.017497472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.614035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"5ff07bb3005ac0c7f16049b08f8e759fda4cef2f194db398d6acbabcd0507b99"} Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.614749 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rzprv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.614816 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.627100 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dw8lf" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.629338 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.629544 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.129512143 +0000 UTC m=+158.118283655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.629682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.630055 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.130038059 +0000 UTC m=+158.118809571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.643416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2x6x8" Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.730880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.731128 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.231091171 +0000 UTC m=+158.219862683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.732041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.734556 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.234548806 +0000 UTC m=+158.223320308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.834430 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.834804 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.334768262 +0000 UTC m=+158.323539774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:22 crc kubenswrapper[4793]: I0126 22:42:22.935991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:22 crc kubenswrapper[4793]: E0126 22:42:22.936727 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.43670513 +0000 UTC m=+158.425476642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.037445 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:23 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:23 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:23 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.037530 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.037816 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.041736 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.538156923 +0000 UTC m=+158.526928425 (durationBeforeRetry 500ms). 
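The router probe output follows the Kubernetes healthz convention: each named check reports [+] ok or [-] failed, any failing check turns the endpoint into an HTTP 500, and failure reasons are withheld unless verbose output is requested. Here backend-http and has-synced are failing while process-running passes, so the startup probe keeps logging statuscode 500. A minimal illustrative handler in that format (not the router's actual wiring):

```go
// Minimal healthz-style aggregation handler in the format the router probe
// output above uses: one [+]/[-] line per named check, HTTP 500 if any
// check fails. Illustrative only; the real router wires its own
// backend-http / has-synced checks into a handler like this.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body := ""
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				// Reasons are withheld unless verbose output is requested.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe logs "statuscode: 500"
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}
```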
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.138987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.139567 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.639553494 +0000 UTC m=+158.628325006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.241406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.241670 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.741635747 +0000 UTC m=+158.730407259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.242035 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.242425 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.74241246 +0000 UTC m=+158.731183972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.308414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2n4v" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.344000 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.344236 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.844194524 +0000 UTC m=+158.832966036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.344492 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.344865 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.844857334 +0000 UTC m=+158.833628846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.445375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.445857 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:23.945808652 +0000 UTC m=+158.934580164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.547268 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.547735 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.047722779 +0000 UTC m=+159.036494291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.558817 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cq8pk" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.628691 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"8209f96ec6a0c7644736fbf68690c1feddeb7374c6997cbc745c5fca5e82f7b0"} Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.629582 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rzprv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.629624 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.649377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.649765 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.14973749 +0000 UTC m=+159.138509002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.671647 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.672572 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.675064 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.691918 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.751148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.753290 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.253277947 +0000 UTC m=+159.242049459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.853905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.854084 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.354052959 +0000 UTC m=+159.342824471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854164 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854230 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.854416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.854795 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.354786592 +0000 UTC m=+159.343558104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.859488 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.860481 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.863490 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.878470 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.955603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.955791 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.45575865 +0000 UTC m=+159.444530152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.956986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957088 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957664 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.957812 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:23 crc kubenswrapper[4793]: E0126 22:42:23.958127 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.458115542 +0000 UTC m=+159.446887054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958410 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.958833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.991896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"certified-operators-78ml2\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:23 crc kubenswrapper[4793]: I0126 22:42:23.994479 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.030229 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:24 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:24 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:24 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.030470 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.062829 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.063500 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.063668 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.563637869 +0000 UTC m=+159.552409381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.064672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.064760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065227 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.065357 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.565347141 +0000 UTC m=+159.554118653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065365 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.065634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.076936 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.094841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"community-operators-rmxpm\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166091 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.166314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.666281958 +0000 UTC m=+159.655053470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.166980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.167159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.167448 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.667412463 +0000 UTC m=+159.656184015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.190506 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.272377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.273108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.273140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.273162 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.274106 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.774086945 +0000 UTC m=+159.762858457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.280550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.286564 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.296769 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.297891 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.329244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"certified-operators-nksn9\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.355858 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.364240 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.364399 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.380806 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387586 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.387915 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.388334 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.888315006 +0000 UTC m=+159.877086518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.404217 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.450201 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488655 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488906 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.488991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.489014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.489038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.489139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:24.989122 +0000 UTC m=+159.977893512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.489205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.547351 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.561398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593813 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593905 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593936 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.593981 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.596235 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.596717 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.597115 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.097088891 +0000 UTC m=+160.085860593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.639231 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"community-operators-dwxx5\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.658876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerStarted","Data":"de4523d60ec6d9e2d26e56021a3eb00ace652444b6905d5221950bbb9e65363d"} Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.682254 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"70a7a78ecb17a11e0094f2d7a84d2a15dbb03fc29b6a22c882bfb4eb0c933bf5"} Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.688635 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.695579 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.695892 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.195873714 +0000 UTC m=+160.184645226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.797152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.798448 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.298432981 +0000 UTC m=+160.287204493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.802408 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.847165 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:42:24 crc kubenswrapper[4793]: I0126 22:42:24.899282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:24 crc kubenswrapper[4793]: E0126 22:42:24.899650 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.399618926 +0000 UTC m=+160.388390438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.001215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.002199 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.502166022 +0000 UTC m=+160.490937534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.048445 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:25 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:25 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:25 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.048500 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.102459 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.102756 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.602738049 +0000 UTC m=+160.591509561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.193998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.204131 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.204532 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.704516362 +0000 UTC m=+160.693287874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.262580 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.305277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.305507 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.805488921 +0000 UTC m=+160.794260433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.327252 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.406815 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.407273 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:25.907247253 +0000 UTC m=+160.896018765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.508658 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.509112 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.009091649 +0000 UTC m=+160.997863161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.609940 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.610472 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.110443278 +0000 UTC m=+161.099214830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.688888 4793 generic.go:334] "Generic (PLEG): container finished" podID="5212b17b-4423-4662-b026-88d37b8e6780" containerID="cc61868d059ab224be57eb0623ed03f00405cf2898a129989b50ff353c392019" exitCode=0 Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.688965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerDied","Data":"cc61868d059ab224be57eb0623ed03f00405cf2898a129989b50ff353c392019"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.693737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" event={"ID":"cdcb0afe-af56-4b15-b712-5443e772c75d","Type":"ContainerStarted","Data":"08b1b0144f5aa5f7e37a6ac60796580a2bde9bcfed0b0717c28bc93d5c267506"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.695613 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerID="c998dd75c98b69795ee5fb5dadc61e23eb1a709df4bc6c6589c9107c4d8626e1" exitCode=0 Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.695688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"c998dd75c98b69795ee5fb5dadc61e23eb1a709df4bc6c6589c9107c4d8626e1"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.695714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerStarted","Data":"b6c39e29a72b59338ca03d2810962928be44b01ddeee62784d5e0ce96eb5572d"} Jan 26 22:42:25 
crc kubenswrapper[4793]: I0126 22:42:25.697265 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerID="53ade5b98f6c11809a6f4eec9f04e9ed3dc37b0586623e2c12f5e703dca23ac1" exitCode=0 Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.697329 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"53ade5b98f6c11809a6f4eec9f04e9ed3dc37b0586623e2c12f5e703dca23ac1"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.698774 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.699014 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerStarted","Data":"cab4441579a7c1ed11b73de8bfb583604040881f6d7e93a577f73fe130acfc37"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.699051 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerStarted","Data":"6941af7ed8417ffd25e4f7451b69776ad3fcfcb6a2db30e20798d831a58cdcca"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.701075 4793 generic.go:334] "Generic (PLEG): container finished" podID="ca21a12c-d4c3-414b-816c-858756e16147" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2" exitCode=0 Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.701170 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.701493 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerStarted","Data":"23aef63c8314502249bed5cdebc0d033c6303855f2021cfbe103e4220fb0c8cd"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.703313 4793 generic.go:334] "Generic (PLEG): container finished" podID="1745bb39-265a-4318-8ede-bd919dc967d0" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" exitCode=0 Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.703354 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.703375 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerStarted","Data":"2d7d770e8ad7e7019ebe6a83e24e325ddc42f15802e0100dce38b289197d532e"} Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.710688 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.710902 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.21087327 +0000 UTC m=+161.199644782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.754474 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5kfbz" podStartSLOduration=11.754453775 podStartE2EDuration="11.754453775s" podCreationTimestamp="2026-01-26 22:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:25.753828756 +0000 UTC m=+160.742600278" watchObservedRunningTime="2026-01-26 22:42:25.754453775 +0000 UTC m=+160.743225287" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.770924 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.770898314 podStartE2EDuration="1.770898314s" podCreationTimestamp="2026-01-26 22:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:25.770821822 +0000 UTC m=+160.759593334" watchObservedRunningTime="2026-01-26 22:42:25.770898314 +0000 UTC m=+160.759669836" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.821606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.822543 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.322515443 +0000 UTC m=+161.311287045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.823808 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.824818 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.835740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.860415 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.860442 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.870627 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.872180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.874221 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.886113 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.922962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923222 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923276 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923404 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923422 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.923447 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.42342012 +0000 UTC m=+161.412191632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:25 crc kubenswrapper[4793]: I0126 22:42:25.923483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:25 crc kubenswrapper[4793]: E0126 22:42:25.923801 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.423795111 +0000 UTC m=+161.412566623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.021682 4793 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.024833 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.025078 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.525045388 +0000 UTC m=+161.513816900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025269 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025343 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 
22:42:26.025483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.025997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.026068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.034457 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 22:42:26 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 26 22:42:26 crc kubenswrapper[4793]: [+]process-running ok Jan 26 22:42:26 crc kubenswrapper[4793]: healthz check failed Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.034554 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.050937 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"redhat-marketplace-s7s8k\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.058028 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.127290 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 
22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.127825 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.627802871 +0000 UTC m=+161.616574383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.180628 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.195788 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.229568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.230506 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.730474642 +0000 UTC m=+161.719246194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.272827 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.274763 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.279848 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.334877 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.335304 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.335331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.335403 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.335810 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.835797103 +0000 UTC m=+161.824568615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.436722 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.436968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.437066 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.437106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.438282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.438372 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:26.938351489 +0000 UTC m=+161.927123011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.439081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.465373 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"redhat-marketplace-xlnl6\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.486507 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.537894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.538241 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.038227575 +0000 UTC m=+162.026999087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.539688 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"]
Jan 26 22:42:26 crc kubenswrapper[4793]: W0126 22:42:26.563959 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ece571b_df1f_4605_8127_b71fb41d2189.slice/crio-ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678 WatchSource:0}: Error finding container ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678: Status 404 returned error can't find the container with id ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.606195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.639061 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.639277 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.139247405 +0000 UTC m=+162.128018907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.639544 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.639935 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.139927146 +0000 UTC m=+162.128698658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.721601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.726717 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nd4pl"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.727460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerStarted","Data":"ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678"}
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.729478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerStarted","Data":"ef6bbd118a6dfe07628822654d8a06a28a7b19157867d05ffe1aa1bb676b3c1d"}
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.736623 4793 generic.go:334] "Generic (PLEG): container finished" podID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerID="cab4441579a7c1ed11b73de8bfb583604040881f6d7e93a577f73fe130acfc37" exitCode=0
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.736804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerDied","Data":"cab4441579a7c1ed11b73de8bfb583604040881f6d7e93a577f73fe130acfc37"}
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.740557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.740781 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.24074293 +0000 UTC m=+162.229514442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.740961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.741842 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.241830383 +0000 UTC m=+162.230601905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.845511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.847840 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.347813294 +0000 UTC m=+162.336584866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.873240 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"]
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.875739 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.879014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.921537 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"]
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.948709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:26 crc kubenswrapper[4793]: E0126 22:42:26.949059 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 22:42:27.44904392 +0000 UTC m=+162.437815432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s8pqg" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 22:42:26 crc kubenswrapper[4793]: I0126 22:42:26.996086 4793 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T22:42:26.021721057Z","Handler":null,"Name":""}
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.031882 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 22:42:27 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 26 22:42:27 crc kubenswrapper[4793]: [+]process-running ok
Jan 26 22:42:27 crc kubenswrapper[4793]: healthz check failed
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.031937 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.043594 4793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.043655 4793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050527 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050602 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.050628 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.051038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.051563 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.054253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.083901 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"redhat-operators-s2bff\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.134048 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"]
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.151291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.152749 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.173557 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.173596 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.191728 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.222284 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.250032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s8pqg\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.251948 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") pod \"5212b17b-4423-4662-b026-88d37b8e6780\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") "
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252112 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") pod \"5212b17b-4423-4662-b026-88d37b8e6780\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") "
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252133 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") pod \"5212b17b-4423-4662-b026-88d37b8e6780\" (UID: \"5212b17b-4423-4662-b026-88d37b8e6780\") "
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252324 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"]
Jan 26 22:42:27 crc kubenswrapper[4793]: E0126 22:42:27.252615 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5212b17b-4423-4662-b026-88d37b8e6780" containerName="collect-profiles"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252634 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5212b17b-4423-4662-b026-88d37b8e6780" containerName="collect-profiles"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.252768 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5212b17b-4423-4662-b026-88d37b8e6780" containerName="collect-profiles"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.254091 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume" (OuterVolumeSpecName: "config-volume") pod "5212b17b-4423-4662-b026-88d37b8e6780" (UID: "5212b17b-4423-4662-b026-88d37b8e6780"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.255270 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.257050 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.260163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.262497 4793 patch_prober.go:28] interesting pod/console-f9d7485db-57ccj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.262570 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-57ccj" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.283665 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"]
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.283829 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4" (OuterVolumeSpecName: "kube-api-access-m9mq4") pod "5212b17b-4423-4662-b026-88d37b8e6780" (UID: "5212b17b-4423-4662-b026-88d37b8e6780"). InnerVolumeSpecName "kube-api-access-m9mq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.284638 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5212b17b-4423-4662-b026-88d37b8e6780" (UID: "5212b17b-4423-4662-b026-88d37b8e6780"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.297058 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305809 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305826 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-2l78j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305853 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.305881 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2l78j" podUID="57535671-8438-44b0-95f3-24679160fb8d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355152 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355323 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355622 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9mq4\" (UniqueName: \"kubernetes.io/projected/5212b17b-4423-4662-b026-88d37b8e6780-kube-api-access-m9mq4\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355643 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5212b17b-4423-4662-b026-88d37b8e6780-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.355652 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5212b17b-4423-4662-b026-88d37b8e6780-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.456968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.457059 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.457081 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.457542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.458043 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.479978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"redhat-operators-ptb4m\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.506600 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.540444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-g9vhm"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.556826 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4dq5"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.598673 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.809638 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.851596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx" event={"ID":"5212b17b-4423-4662-b026-88d37b8e6780","Type":"ContainerDied","Data":"f22596edcefced17885ef893f7cee49907f5d9cebcaf8c8793ed7795985ac763"}
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.851644 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22596edcefced17885ef893f7cee49907f5d9cebcaf8c8793ed7795985ac763"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.851724 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491110-fw8kx"
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.862405 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ece571b-df1f-4605-8127-b71fb41d2189" containerID="df91a57438b106203c49f3ea641745b85b7989ad66fa5fea056e5bed77508967" exitCode=0
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.862487 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"df91a57438b106203c49f3ea641745b85b7989ad66fa5fea056e5bed77508967"}
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.877739 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"]
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.958612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerStarted","Data":"16af8ddfe563636e3746dd358753066c3b4429f75246547371168f83e1f97609"}
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.986173 4793 generic.go:334] "Generic (PLEG): container finished" podID="99c0251d-b287-4c00-a392-c43b8164e73d" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" exitCode=0
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.987478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437"}
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.987526 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerStarted","Data":"18efb665e88f94cc0c7aef3bce9135f6a8b698b34f0bd84750c1350537a84b4c"}
Jan 26 22:42:27 crc kubenswrapper[4793]: I0126 22:42:27.987539 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"]
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.039142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xpkxl"
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.042871 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 22:42:28 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 26 22:42:28 crc kubenswrapper[4793]: [+]process-running ok
Jan 26 22:42:28 crc kubenswrapper[4793]: healthz check failed
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.042931 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 22:42:28 crc kubenswrapper[4793]: W0126 22:42:28.066681 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceed8696_7889_4e56_b430_dc4a6d46e1e6.slice/crio-4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db WatchSource:0}: Error finding container 4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db: Status 404 returned error can't find the container with id 4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.144443 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"]
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.156366 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv"
Jan 26 22:42:28 crc kubenswrapper[4793]: W0126 22:42:28.220828 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fbd096a_9989_4aa8_8c4d_e77ca47aee86.slice/crio-21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c WatchSource:0}: Error finding container 21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c: Status 404 returned error can't find the container with id 21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.455823 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.581726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") pod \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") "
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.581828 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") pod \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\" (UID: \"65214f74-e7f7-41fa-b0f4-3efa90a955a9\") "
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.582357 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "65214f74-e7f7-41fa-b0f4-3efa90a955a9" (UID: "65214f74-e7f7-41fa-b0f4-3efa90a955a9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.593467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "65214f74-e7f7-41fa-b0f4-3efa90a955a9" (UID: "65214f74-e7f7-41fa-b0f4-3efa90a955a9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.683158 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:28 crc kubenswrapper[4793]: I0126 22:42:28.683420 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65214f74-e7f7-41fa-b0f4-3efa90a955a9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.048509 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 22:42:29 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 26 22:42:29 crc kubenswrapper[4793]: [+]process-running ok
Jan 26 22:42:29 crc kubenswrapper[4793]: healthz check failed
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.048582 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.085584 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerID="43bfb5939bd86706aa081141535e22be807fc9c0577dcd35f30a1721eb3de604" exitCode=0
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.085692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"43bfb5939bd86706aa081141535e22be807fc9c0577dcd35f30a1721eb3de604"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.085724 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerStarted","Data":"ab28d055e5e6d116639cc20c8075ccee993ec8ea1d3f84c9e75e2020b4dbfa88"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.103705 4793 generic.go:334] "Generic (PLEG): container finished" podID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerID="16af8ddfe563636e3746dd358753066c3b4429f75246547371168f83e1f97609" exitCode=0
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.103818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerDied","Data":"16af8ddfe563636e3746dd358753066c3b4429f75246547371168f83e1f97609"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.127787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"65214f74-e7f7-41fa-b0f4-3efa90a955a9","Type":"ContainerDied","Data":"6941af7ed8417ffd25e4f7451b69776ad3fcfcb6a2db30e20798d831a58cdcca"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.127856 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6941af7ed8417ffd25e4f7451b69776ad3fcfcb6a2db30e20798d831a58cdcca"
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.127920 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.147094 4793 generic.go:334] "Generic (PLEG): container finished" podID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" exitCode=0
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.147210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.147249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerStarted","Data":"21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.175995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerStarted","Data":"3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.176057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerStarted","Data":"4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db"}
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.177432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.226756 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" podStartSLOduration=137.226725002 podStartE2EDuration="2m17.226725002s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:29.213784679 +0000 UTC m=+164.202556191" watchObservedRunningTime="2026-01-26 22:42:29.226725002 +0000 UTC m=+164.215496514"
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.456852 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.606585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") pod \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") "
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.606676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") pod \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\" (UID: \"810b861e-ca2f-4912-a6a3-1cbfa7bb7804\") "
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.606733 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "810b861e-ca2f-4912-a6a3-1cbfa7bb7804" (UID: "810b861e-ca2f-4912-a6a3-1cbfa7bb7804"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.607095 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.614292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "810b861e-ca2f-4912-a6a3-1cbfa7bb7804" (UID: "810b861e-ca2f-4912-a6a3-1cbfa7bb7804"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:42:29 crc kubenswrapper[4793]: I0126 22:42:29.710243 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810b861e-ca2f-4912-a6a3-1cbfa7bb7804-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.030985 4793 patch_prober.go:28] interesting pod/router-default-5444994796-xpkxl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 22:42:30 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 26 22:42:30 crc kubenswrapper[4793]: [+]process-running ok
Jan 26 22:42:30 crc kubenswrapper[4793]: healthz check failed
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.031048 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xpkxl" podUID="6b2e3c07-cf6d-4c97-b19b-dacc24b947d6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.278058 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.278039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"810b861e-ca2f-4912-a6a3-1cbfa7bb7804","Type":"ContainerDied","Data":"ef6bbd118a6dfe07628822654d8a06a28a7b19157867d05ffe1aa1bb676b3c1d"}
Jan 26 22:42:30 crc kubenswrapper[4793]: I0126 22:42:30.278119 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6bbd118a6dfe07628822654d8a06a28a7b19157867d05ffe1aa1bb676b3c1d"
Jan 26 22:42:31 crc kubenswrapper[4793]: I0126 22:42:31.035565 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xpkxl"
Jan 26 22:42:31 crc kubenswrapper[4793]: I0126 22:42:31.042742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xpkxl"
Jan 26 22:42:32 crc kubenswrapper[4793]: I0126 22:42:32.868889 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jv28b"
Jan 26 22:42:34 crc kubenswrapper[4793]: I0126 22:42:34.805634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:42:34 crc kubenswrapper[4793]: I0126 22:42:34.833029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc-metrics-certs\") pod \"network-metrics-daemon-7rl9w\" (UID: \"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc\") " pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:42:34 crc kubenswrapper[4793]: I0126 22:42:34.914538 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7rl9w"
Jan 26 22:42:37 crc kubenswrapper[4793]: I0126 22:42:37.275495 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:37 crc kubenswrapper[4793]: I0126 22:42:37.279043 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:42:37 crc kubenswrapper[4793]: I0126 22:42:37.322777 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2l78j"
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.378262 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"]
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.379440 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" containerID="cri-o://536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070" gracePeriod=30
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.392214 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"]
Jan 26 22:42:42 crc kubenswrapper[4793]: I0126 22:42:42.392540 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" containerID="cri-o://96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31" gracePeriod=30
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.461208 4793 generic.go:334] "Generic (PLEG): container finished" podID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerID="96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31" exitCode=0
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.461282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerDied","Data":"96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31"}
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.464632 4793 generic.go:334] "Generic (PLEG): container finished" podID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerID="536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070" exitCode=0
Jan 26 22:42:43 crc kubenswrapper[4793]: I0126 22:42:43.464691 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerDied","Data":"536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070"}
Jan 26 22:42:46 crc kubenswrapper[4793]: I0126 22:42:46.778290 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r5m9x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 26 22:42:46 crc kubenswrapper[4793]: I0126 22:42:46.778761 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 26 22:42:47 crc kubenswrapper[4793]: I0126 22:42:47.178337 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 26 22:42:47 crc kubenswrapper[4793]: I0126 22:42:47.178455 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 26 22:42:47 crc kubenswrapper[4793]: I0126 22:42:47.306608 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg"
Jan 26 22:42:48 crc kubenswrapper[4793]: I0126 22:42:48.323307 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:42:48 crc kubenswrapper[4793]: I0126 22:42:48.323936 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:42:53 crc kubenswrapper[4793]: E0126 22:42:53.131534 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 26 22:42:53 crc kubenswrapper[4793]: E0126 22:42:53.132126 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lhnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rmxpm_openshift-marketplace(1ea0b73a-1820-4411-bbbf-acd3d22899e0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:53 crc kubenswrapper[4793]: E0126 22:42:53.133384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rmxpm" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0"
Jan 26 22:42:53 crc kubenswrapper[4793]: I0126 22:42:53.633995 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.683596 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rmxpm" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.786294 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.786498 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s2bff_openshift-marketplace(b0a722d2-056a-4bf2-a33c-719ee8aba7a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.787712 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.832112 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.832369 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpdhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xlnl6_openshift-marketplace(99c0251d-b287-4c00-a392-c43b8164e73d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 22:42:56 crc kubenswrapper[4793]: E0126 22:42:56.833572 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xlnl6" podUID="99c0251d-b287-4c00-a392-c43b8164e73d"
Jan 26 22:42:57 crc kubenswrapper[4793]: I0126 22:42:57.777154 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r5m9x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 22:42:57 crc kubenswrapper[4793]: I0126 22:42:57.777656 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.137944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rxzcb"
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.177973 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2jn5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 22:42:58 crc kubenswrapper[4793]: I0126
22:42:58.178061 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.267849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xlnl6" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.267930 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.332001 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.332183 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tf6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-78ml2_openshift-marketplace(ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.334637 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-78ml2" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.416867 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.422491 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467605 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467885 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467898 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467913 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerName="pruner" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467919 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerName="pruner" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467931 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467937 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.467947 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerName="pruner" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.467953 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerName="pruner" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468090 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" containerName="route-controller-manager" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468104 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="65214f74-e7f7-41fa-b0f4-3efa90a955a9" containerName="pruner" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468115 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" containerName="controller-manager" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468125 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="810b861e-ca2f-4912-a6a3-1cbfa7bb7804" containerName="pruner" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.468565 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488046 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488241 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488260 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488281 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488360 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488386 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") pod \"58669a0f-eecb-49dd-9637-af4dc30cd20d\" (UID: \"58669a0f-eecb-49dd-9637-af4dc30cd20d\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488408 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488432 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") pod \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\" (UID: \"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a\") " Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod 
\"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488629 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488684 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.488721 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.489544 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.491733 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca" (OuterVolumeSpecName: "client-ca") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.492368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.492595 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config" (OuterVolumeSpecName: "config") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.492842 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config" (OuterVolumeSpecName: "config") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.499804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr" (OuterVolumeSpecName: "kube-api-access-fmqdr") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "kube-api-access-fmqdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.504011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.517411 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk" (OuterVolumeSpecName: "kube-api-access-p74mk") pod "58669a0f-eecb-49dd-9637-af4dc30cd20d" (UID: "58669a0f-eecb-49dd-9637-af4dc30cd20d"). InnerVolumeSpecName "kube-api-access-p74mk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.524594 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.529298 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.529438 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8sb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nksn9_openshift-marketplace(ca21a12c-d4c3-414b-816c-858756e16147): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.530813 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nksn9" podUID="ca21a12c-d4c3-414b-816c-858756e16147" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.535468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" (UID: "0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.571678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" event={"ID":"58669a0f-eecb-49dd-9637-af4dc30cd20d","Type":"ContainerDied","Data":"4ef809a2826a84fc4c6df7b47339752d4935d8cadd1765fb919c6712c00c442f"} Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.571762 4793 scope.go:117] "RemoveContainer" containerID="536ca99918fb3552cb94df324b5d23422c69b27ef2ccfca8c0fb91fe19520070" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.571779 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2jn5q" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.576955 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.581403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x" event={"ID":"0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a","Type":"ContainerDied","Data":"35ef9fd4d9f1a8c21342432b8e94e83c86e086b0ceab6b71120d185cc8b1ccf3"} Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590134 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590324 4793 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-p74mk\" (UniqueName: \"kubernetes.io/projected/58669a0f-eecb-49dd-9637-af4dc30cd20d-kube-api-access-p74mk\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590334 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590343 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58669a0f-eecb-49dd-9637-af4dc30cd20d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590372 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmqdr\" (UniqueName: \"kubernetes.io/projected/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-kube-api-access-fmqdr\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590381 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590389 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590398 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590406 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.590414 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58669a0f-eecb-49dd-9637-af4dc30cd20d-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.591722 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.594081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.599101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 
22:42:58.601170 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-78ml2" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" Jan 26 22:42:58 crc kubenswrapper[4793]: E0126 22:42:58.601292 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nksn9" podUID="ca21a12c-d4c3-414b-816c-858756e16147" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.601662 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.610800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"controller-manager-5d76f5f7f7-rwrlq\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.615660 4793 scope.go:117] "RemoveContainer" containerID="96bf08a8416db674215d43cafdaad7142ec78bb80457a16a6b41c8934a8f7f31" Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.635590 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7rl9w"] Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.643472 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.645678 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r5m9x"] Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.654967 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.655023 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2jn5q"] Jan 26 22:42:58 crc kubenswrapper[4793]: W0126 22:42:58.658934 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2174e8d7_3f7f_4e0e_b17e_1f0053a52ccc.slice/crio-5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3 WatchSource:0}: Error finding container 5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3: Status 404 returned error can't find the container with id 5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3 Jan 26 22:42:58 crc kubenswrapper[4793]: I0126 22:42:58.790367 4793 util.go:30] "No sandbox for pod can be found. 
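
After repeated ErrImagePull failures the kubelet stops retrying immediately and reports ImagePullBackOff, as in the two "Back-off pulling image" entries above; the retry delay grows roughly exponentially up to a cap. A minimal sketch of that policy with illustrative constants (kubelet's defaults are on the order of a 10s base doubling to a 5m cap, but the exact values are an assumption here):

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the delay after each failure, capped at max.
    func nextBackoff(failures int, base, max time.Duration) time.Duration {
        d := base
        for i := 0; i < failures; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for n := 0; n < 6; n++ {
            fmt.Printf("failure %d -> wait %v\n", n, nextBackoff(n, 10*time.Second, 5*time.Minute))
        }
    }
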
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.057656 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:42:59 crc kubenswrapper[4793]: W0126 22:42:59.082754 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c1b812_ff9c_440b_ace5_60706632e1e0.slice/crio-3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc WatchSource:0}: Error finding container 3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc: Status 404 returned error can't find the container with id 3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.596549 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ece571b-df1f-4605-8127-b71fb41d2189" containerID="9deb6cc715d13e34300108f710dc2b32d0af0395f968c159a45df8f3c1fa5d0c" exitCode=0 Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.596629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"9deb6cc715d13e34300108f710dc2b32d0af0395f968c159a45df8f3c1fa5d0c"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.599823 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerStarted","Data":"4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.599879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerStarted","Data":"3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.600176 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.604341 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" event={"ID":"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc","Type":"ContainerStarted","Data":"fb2c9fd91420503e13b5501b06b30c9ba2287c596abc2e41929c464cc9b9de23"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.604374 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" event={"ID":"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc","Type":"ContainerStarted","Data":"2243204da7184719e2d6f61ad610cf88e055053f094f80b696d3f43a64d755e6"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.604384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7rl9w" event={"ID":"2174e8d7-3f7f-4e0e-b17e-1f0053a52ccc","Type":"ContainerStarted","Data":"5021dd9f3ab9671a9f402dd730fd4cae5d8a6557528c9a73dcac8eb73239b3b3"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.606276 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.606632 4793 generic.go:334] "Generic 
(PLEG): container finished" podID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" exitCode=0 Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.606693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.610897 4793 generic.go:334] "Generic (PLEG): container finished" podID="1745bb39-265a-4318-8ede-bd919dc967d0" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" exitCode=0 Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.610936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065"} Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.677801 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7rl9w" podStartSLOduration=167.677784292 podStartE2EDuration="2m47.677784292s" podCreationTimestamp="2026-01-26 22:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:59.674581195 +0000 UTC m=+194.663352707" watchObservedRunningTime="2026-01-26 22:42:59.677784292 +0000 UTC m=+194.666555804" Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.690744 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" podStartSLOduration=17.690725396 podStartE2EDuration="17.690725396s" podCreationTimestamp="2026-01-26 22:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:42:59.690081176 +0000 UTC m=+194.678852688" watchObservedRunningTime="2026-01-26 22:42:59.690725396 +0000 UTC m=+194.679496908" Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.766838 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a" path="/var/lib/kubelet/pods/0a7b22ec-f4a9-4d80-a1d4-1ea270cf831a/volumes" Jan 26 22:42:59 crc kubenswrapper[4793]: I0126 22:42:59.767623 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58669a0f-eecb-49dd-9637-af4dc30cd20d" path="/var/lib/kubelet/pods/58669a0f-eecb-49dd-9637-af4dc30cd20d/volumes" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.470936 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.472520 4793 util.go:30] "No sandbox for pod can be found. 
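
In the pod_startup_latency_tracker entries, firstStartedPulling and lastFinishedPulling of "0001-01-01 00:00:00 +0000 UTC" are Go's zero time.Time, meaning no image pull was observed for that pod (its images were already present), so podStartSLOduration equals the full creation-to-running interval. A minimal sketch of the sentinel check, using the network-metrics-daemon timestamps from the log (created 22:40:12, observed running 22:42:59, i.e. the logged 2m47s):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var firstStartedPulling time.Time // zero value prints as 0001-01-01 00:00:00 +0000 UTC
        created := time.Date(2026, 1, 26, 22, 40, 12, 0, time.UTC)
        running := time.Date(2026, 1, 26, 22, 42, 59, 0, time.UTC)

        if firstStartedPulling.IsZero() {
            // No pull observed: the SLO duration spans creation to running.
            fmt.Println("podStartSLOduration =", running.Sub(created)) // 2m47s
        }
    }
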
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.476275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.481922 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.482478 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.482879 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.483932 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.484135 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.491412 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519786 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.519885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620779 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod 
\"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620866 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620893 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.620933 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.622181 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.622458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.622514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerStarted","Data":"b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052"} Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.625490 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerStarted","Data":"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7"} Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.630839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerStarted","Data":"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878"} Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.632252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"route-controller-manager-c46bc4746-87xtp\" 
(UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.643567 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"route-controller-manager-c46bc4746-87xtp\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.645766 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s7s8k" podStartSLOduration=3.49522839 podStartE2EDuration="35.645750339s" podCreationTimestamp="2026-01-26 22:42:25 +0000 UTC" firstStartedPulling="2026-01-26 22:42:27.911607674 +0000 UTC m=+162.900379186" lastFinishedPulling="2026-01-26 22:43:00.062129613 +0000 UTC m=+195.050901135" observedRunningTime="2026-01-26 22:43:00.64196756 +0000 UTC m=+195.630739072" watchObservedRunningTime="2026-01-26 22:43:00.645750339 +0000 UTC m=+195.634521841" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.668063 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwxx5" podStartSLOduration=2.375496819 podStartE2EDuration="36.668042058s" podCreationTimestamp="2026-01-26 22:42:24 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.705071574 +0000 UTC m=+160.693843086" lastFinishedPulling="2026-01-26 22:42:59.997616813 +0000 UTC m=+194.986388325" observedRunningTime="2026-01-26 22:43:00.663820765 +0000 UTC m=+195.652592297" watchObservedRunningTime="2026-01-26 22:43:00.668042058 +0000 UTC m=+195.656813570" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.684574 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ptb4m" podStartSLOduration=2.594952834 podStartE2EDuration="33.684556641s" podCreationTimestamp="2026-01-26 22:42:27 +0000 UTC" firstStartedPulling="2026-01-26 22:42:29.149943278 +0000 UTC m=+164.138714790" lastFinishedPulling="2026-01-26 22:43:00.239547085 +0000 UTC m=+195.228318597" observedRunningTime="2026-01-26 22:43:00.68217353 +0000 UTC m=+195.670945042" watchObservedRunningTime="2026-01-26 22:43:00.684556641 +0000 UTC m=+195.673328153" Jan 26 22:43:00 crc kubenswrapper[4793]: I0126 22:43:00.798779 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.048463 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.657489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerStarted","Data":"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2"} Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.658036 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerStarted","Data":"07b1ed7ba1378af9b686711c77d9f23a688c635a72bae5bc43266845b84ec09a"} Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.659684 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.681543 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:01 crc kubenswrapper[4793]: I0126 22:43:01.682107 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" podStartSLOduration=19.682085493 podStartE2EDuration="19.682085493s" podCreationTimestamp="2026-01-26 22:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:01.677795196 +0000 UTC m=+196.666566718" watchObservedRunningTime="2026-01-26 22:43:01.682085493 +0000 UTC m=+196.670857005" Jan 26 22:43:02 crc kubenswrapper[4793]: I0126 22:43:02.341340 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:43:02 crc kubenswrapper[4793]: I0126 22:43:02.427100 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:02 crc kubenswrapper[4793]: I0126 22:43:02.662339 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager" containerID="cri-o://4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99" gracePeriod=30 Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.668610 4793 generic.go:334] "Generic (PLEG): container finished" podID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerID="4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99" exitCode=0 Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.668691 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerDied","Data":"4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99"} Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.669026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" event={"ID":"55c1b812-ff9c-440b-ace5-60706632e1e0","Type":"ContainerDied","Data":"3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc"} Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.669047 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b30b12b794bb67fe4a7c6b695a9e99c1546db39c74c75cf3941c7485309a5fc" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.669134 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" containerID="cri-o://8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" gracePeriod=30 Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.685578 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.714458 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:03 crc kubenswrapper[4793]: E0126 22:43:03.714747 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.714767 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.714891 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" containerName="controller-manager" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.715379 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.726465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775271 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775436 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775466 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") pod \"55c1b812-ff9c-440b-ace5-60706632e1e0\" (UID: \"55c1b812-ff9c-440b-ace5-60706632e1e0\") " Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775558 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775609 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775688 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.775717 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.777992 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.778432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config" (OuterVolumeSpecName: "config") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.778487 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.786641 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs" (OuterVolumeSpecName: "kube-api-access-87fjs") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "kube-api-access-87fjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.786906 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55c1b812-ff9c-440b-ace5-60706632e1e0" (UID: "55c1b812-ff9c-440b-ace5-60706632e1e0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909287 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909418 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909477 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c1b812-ff9c-440b-ace5-60706632e1e0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909487 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909496 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87fjs\" (UniqueName: \"kubernetes.io/projected/55c1b812-ff9c-440b-ace5-60706632e1e0-kube-api-access-87fjs\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909508 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.909516 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c1b812-ff9c-440b-ace5-60706632e1e0-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.910511 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.911095 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.911985 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.915515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:03 crc kubenswrapper[4793]: I0126 22:43:03.934217 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"controller-manager-84c7c9c945-fvjfr\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.117840 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.129378 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215119 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215247 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.215408 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") pod \"5d50b289-073a-4890-ab14-8ce2138b21b6\" (UID: \"5d50b289-073a-4890-ab14-8ce2138b21b6\") " Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.216366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.216617 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config" (OuterVolumeSpecName: "config") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.218764 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.221415 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv" (OuterVolumeSpecName: "kube-api-access-ftrlv") pod "5d50b289-073a-4890-ab14-8ce2138b21b6" (UID: "5d50b289-073a-4890-ab14-8ce2138b21b6"). InnerVolumeSpecName "kube-api-access-ftrlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.317558 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.320344 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d50b289-073a-4890-ab14-8ce2138b21b6-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.320393 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftrlv\" (UniqueName: \"kubernetes.io/projected/5d50b289-073a-4890-ab14-8ce2138b21b6-kube-api-access-ftrlv\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.320467 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d50b289-073a-4890-ab14-8ce2138b21b6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.345998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676374 4793 generic.go:334] "Generic (PLEG): container finished" podID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" exitCode=0 Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676442 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerDied","Data":"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2"} Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676581 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp" event={"ID":"5d50b289-073a-4890-ab14-8ce2138b21b6","Type":"ContainerDied","Data":"07b1ed7ba1378af9b686711c77d9f23a688c635a72bae5bc43266845b84ec09a"} Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.676616 4793 scope.go:117] "RemoveContainer" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.690056 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerStarted","Data":"45b69ac25c01759287e2d0a5eea45d0fa774df75dbd92ec0b24b338a87d0949d"} Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.690096 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.700233 4793 scope.go:117] "RemoveContainer" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" Jan 26 22:43:04 crc kubenswrapper[4793]: E0126 22:43:04.704317 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2\": container with ID starting with 8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2 not found: ID does not exist" containerID="8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.704366 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2"} err="failed to get container status \"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2\": rpc error: code = NotFound desc = could not find container \"8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2\": container with ID starting with 8d6ed5b587324d5e899bf8687f0edb96daba563b1cc9274b7886a9a4fd14f0d2 not found: ID does not exist" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.730882 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.735059 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d76f5f7f7-rwrlq"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.737735 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.740809 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c46bc4746-87xtp"] Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.803610 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:04 crc kubenswrapper[4793]: I0126 22:43:04.803682 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.014016 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.698833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerStarted","Data":"c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f"} Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.700743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.706029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.725897 4793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" podStartSLOduration=3.72587738 podStartE2EDuration="3.72587738s" podCreationTimestamp="2026-01-26 22:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:05.72001635 +0000 UTC m=+200.708787882" watchObservedRunningTime="2026-01-26 22:43:05.72587738 +0000 UTC m=+200.714648892" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.756406 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.770741 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c1b812-ff9c-440b-ace5-60706632e1e0" path="/var/lib/kubelet/pods/55c1b812-ff9c-440b-ace5-60706632e1e0/volumes" Jan 26 22:43:05 crc kubenswrapper[4793]: I0126 22:43:05.771742 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" path="/var/lib/kubelet/pods/5d50b289-073a-4890-ab14-8ce2138b21b6/volumes" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.196115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.196233 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.242899 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.466281 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:06 crc kubenswrapper[4793]: E0126 22:43:06.466561 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.466584 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.466779 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d50b289-073a-4890-ab14-8ce2138b21b6" containerName="route-controller-manager" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.467395 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.473698 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.473835 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.473886 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.474034 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.474067 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.474364 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.480836 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.548532 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.549151 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.549182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.549229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650488 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod 
\"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650568 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.650633 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.651828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.653024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.658405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.671637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"route-controller-manager-5695d9c55d-2wvxs\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.746560 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:43:06 crc kubenswrapper[4793]: I0126 22:43:06.784014 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.166905 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.201298 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:07 crc kubenswrapper[4793]: W0126 22:43:07.208742 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12030821_2df1_4742_8879_2885ea6315da.slice/crio-59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9 WatchSource:0}: Error finding container 59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9: Status 404 returned error can't find the container with id 59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9 Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.599413 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.599484 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.671344 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerStarted","Data":"53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853"} Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710281 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerStarted","Data":"59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9"} Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710588 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.710892 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dwxx5" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" containerID="cri-o://2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" gracePeriod=2 Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.738275 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" podStartSLOduration=5.738252825 podStartE2EDuration="5.738252825s" podCreationTimestamp="2026-01-26 22:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:07.734538259 +0000 UTC m=+202.723309771" watchObservedRunningTime="2026-01-26 22:43:07.738252825 +0000 UTC m=+202.727024337" Jan 26 22:43:07 crc kubenswrapper[4793]: I0126 22:43:07.751456 4793 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.213846 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.215122 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.217874 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.218262 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.226020 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.271419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.279475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.279547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.332349 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.380594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.380673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.380745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.402634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.482067 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") pod \"1745bb39-265a-4318-8ede-bd919dc967d0\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.482538 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") pod \"1745bb39-265a-4318-8ede-bd919dc967d0\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.482782 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") pod \"1745bb39-265a-4318-8ede-bd919dc967d0\" (UID: \"1745bb39-265a-4318-8ede-bd919dc967d0\") " Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.483411 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities" (OuterVolumeSpecName: "utilities") pod "1745bb39-265a-4318-8ede-bd919dc967d0" (UID: "1745bb39-265a-4318-8ede-bd919dc967d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.485216 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.493322 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx" (OuterVolumeSpecName: "kube-api-access-4xlqx") pod "1745bb39-265a-4318-8ede-bd919dc967d0" (UID: "1745bb39-265a-4318-8ede-bd919dc967d0"). InnerVolumeSpecName "kube-api-access-4xlqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.533772 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.560309 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1745bb39-265a-4318-8ede-bd919dc967d0" (UID: "1745bb39-265a-4318-8ede-bd919dc967d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.586825 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745bb39-265a-4318-8ede-bd919dc967d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.586874 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xlqx\" (UniqueName: \"kubernetes.io/projected/1745bb39-265a-4318-8ede-bd919dc967d0-kube-api-access-4xlqx\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725305 4793 generic.go:334] "Generic (PLEG): container finished" podID="1745bb39-265a-4318-8ede-bd919dc967d0" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" exitCode=0 Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878"} Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwxx5" event={"ID":"1745bb39-265a-4318-8ede-bd919dc967d0","Type":"ContainerDied","Data":"2d7d770e8ad7e7019ebe6a83e24e325ddc42f15802e0100dce38b289197d532e"} Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725979 4793 scope.go:117] "RemoveContainer" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.725527 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwxx5" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.759296 4793 scope.go:117] "RemoveContainer" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.762234 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.772870 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dwxx5"] Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.790969 4793 scope.go:117] "RemoveContainer" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.814653 4793 scope.go:117] "RemoveContainer" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" Jan 26 22:43:08 crc kubenswrapper[4793]: E0126 22:43:08.815294 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878\": container with ID starting with 2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878 not found: ID does not exist" containerID="2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815344 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878"} err="failed to get container status \"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878\": rpc error: code = NotFound desc = could not find container \"2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878\": container with ID starting with 2f5ec843b208072447a387a5067b3eb105cca68c9b8ba216a35cf6058f355878 not found: ID does not exist" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815383 4793 scope.go:117] "RemoveContainer" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" Jan 26 22:43:08 crc kubenswrapper[4793]: E0126 22:43:08.815774 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065\": container with ID starting with bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065 not found: ID does not exist" containerID="bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065"} err="failed to get container status \"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065\": rpc error: code = NotFound desc = could not find container \"bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065\": container with ID starting with bed291dd0f9fce29a9ab1fa83bb9fd135a0abe6baa8876daeb2b187275f78065 not found: ID does not exist" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.815827 4793 scope.go:117] "RemoveContainer" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" Jan 26 22:43:08 crc kubenswrapper[4793]: E0126 22:43:08.816076 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d\": container with ID starting with 4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d not found: ID does not exist" containerID="4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.816093 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d"} err="failed to get container status \"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d\": rpc error: code = NotFound desc = could not find container \"4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d\": container with ID starting with 4011ee6a69855743693760e7e33b998121986206fc27959b5cf1199c255c533d not found: ID does not exist" Jan 26 22:43:08 crc kubenswrapper[4793]: I0126 22:43:08.963245 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 22:43:08 crc kubenswrapper[4793]: W0126 22:43:08.972886 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod013dc545_6073_4075_833b_e3e50bf89413.slice/crio-3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb WatchSource:0}: Error finding container 3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb: Status 404 returned error can't find the container with id 3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.734939 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerStarted","Data":"259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.738512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerStarted","Data":"fcccf55a3af675341dd84535250133acc685d822aba196eb2b52bfe7ef72912c"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.738564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerStarted","Data":"3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.742670 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerID="f47964ab77003c1812c68563a6ab332512b80c93bca84785abf6954776142bb0" exitCode=0 Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.742741 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"f47964ab77003c1812c68563a6ab332512b80c93bca84785abf6954776142bb0"} Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.770435 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" path="/var/lib/kubelet/pods/1745bb39-265a-4318-8ede-bd919dc967d0/volumes" Jan 26 22:43:09 crc kubenswrapper[4793]: I0126 22:43:09.795080 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.795055122 podStartE2EDuration="1.795055122s" podCreationTimestamp="2026-01-26 22:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:09.789260165 +0000 UTC m=+204.778031687" watchObservedRunningTime="2026-01-26 22:43:09.795055122 +0000 UTC m=+204.783826654" Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.568751 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.569647 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ptb4m" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" containerID="cri-o://50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" gracePeriod=2 Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.751701 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerStarted","Data":"55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4"} Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.754875 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerID="259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d" exitCode=0 Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.754992 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d"} Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.757915 4793 generic.go:334] "Generic (PLEG): container finished" podID="013dc545-6073-4075-833b-e3e50bf89413" containerID="fcccf55a3af675341dd84535250133acc685d822aba196eb2b52bfe7ef72912c" exitCode=0 Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.757982 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerDied","Data":"fcccf55a3af675341dd84535250133acc685d822aba196eb2b52bfe7ef72912c"} Jan 26 22:43:10 crc kubenswrapper[4793]: I0126 22:43:10.775386 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmxpm" podStartSLOduration=3.337995533 podStartE2EDuration="47.775363378s" podCreationTimestamp="2026-01-26 22:42:23 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.698461333 +0000 UTC m=+160.687232845" lastFinishedPulling="2026-01-26 22:43:10.135829178 +0000 UTC m=+205.124600690" observedRunningTime="2026-01-26 22:43:10.770771412 +0000 UTC m=+205.759542944" watchObservedRunningTime="2026-01-26 22:43:10.775363378 +0000 UTC m=+205.764134910" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.718116 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.776064 4793 generic.go:334] "Generic (PLEG): container finished" podID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" exitCode=0 Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.776222 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptb4m" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerStarted","Data":"a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f"} Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7"} Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptb4m" event={"ID":"2fbd096a-9989-4aa8-8c4d-e77ca47aee86","Type":"ContainerDied","Data":"21253583ba7cd52b82e807bf5d28038f8e777d6d397df9fcae9a16cd614b0f2c"} Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.778578 4793 scope.go:117] "RemoveContainer" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.805272 4793 scope.go:117] "RemoveContainer" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.828731 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2bff" podStartSLOduration=3.458546559 podStartE2EDuration="45.828692921s" podCreationTimestamp="2026-01-26 22:42:26 +0000 UTC" firstStartedPulling="2026-01-26 22:42:29.087149991 +0000 UTC m=+164.075921503" lastFinishedPulling="2026-01-26 22:43:11.457296353 +0000 UTC m=+206.446067865" observedRunningTime="2026-01-26 22:43:11.827659906 +0000 UTC m=+206.816431418" watchObservedRunningTime="2026-01-26 22:43:11.828692921 +0000 UTC m=+206.817464433" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.838458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") pod \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.838624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") pod \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.838649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") pod \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\" (UID: \"2fbd096a-9989-4aa8-8c4d-e77ca47aee86\") " Jan 26 
22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.839904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities" (OuterVolumeSpecName: "utilities") pod "2fbd096a-9989-4aa8-8c4d-e77ca47aee86" (UID: "2fbd096a-9989-4aa8-8c4d-e77ca47aee86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.847254 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt" (OuterVolumeSpecName: "kube-api-access-jlftt") pod "2fbd096a-9989-4aa8-8c4d-e77ca47aee86" (UID: "2fbd096a-9989-4aa8-8c4d-e77ca47aee86"). InnerVolumeSpecName "kube-api-access-jlftt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.855748 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.855785 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlftt\" (UniqueName: \"kubernetes.io/projected/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-kube-api-access-jlftt\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.858002 4793 scope.go:117] "RemoveContainer" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.897885 4793 scope.go:117] "RemoveContainer" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" Jan 26 22:43:11 crc kubenswrapper[4793]: E0126 22:43:11.913799 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7\": container with ID starting with 50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7 not found: ID does not exist" containerID="50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.913905 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7"} err="failed to get container status \"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7\": rpc error: code = NotFound desc = could not find container \"50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7\": container with ID starting with 50073214fbcf80b970c982fda1bd2daaa4d38847260aabc4d7de1e170e2588b7 not found: ID does not exist" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.914354 4793 scope.go:117] "RemoveContainer" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" Jan 26 22:43:11 crc kubenswrapper[4793]: E0126 22:43:11.920035 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d\": container with ID starting with 1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d not found: ID does not exist" containerID="1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.920106 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d"} err="failed to get container status \"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d\": rpc error: code = NotFound desc = could not find container \"1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d\": container with ID starting with 1465085f9d0d5e083e962166693b9df09dd501d1870a56d5375b170e0c99986d not found: ID does not exist" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.920149 4793 scope.go:117] "RemoveContainer" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" Jan 26 22:43:11 crc kubenswrapper[4793]: E0126 22:43:11.920955 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec\": container with ID starting with b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec not found: ID does not exist" containerID="b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.920979 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec"} err="failed to get container status \"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec\": rpc error: code = NotFound desc = could not find container \"b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec\": container with ID starting with b304947f54f99be81a30ff6485558ffb1584de7d232039f73c545f74cd7850ec not found: ID does not exist" Jan 26 22:43:11 crc kubenswrapper[4793]: I0126 22:43:11.978284 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fbd096a-9989-4aa8-8c4d-e77ca47aee86" (UID: "2fbd096a-9989-4aa8-8c4d-e77ca47aee86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.058938 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fbd096a-9989-4aa8-8c4d-e77ca47aee86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.146443 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.150098 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.156612 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ptb4m"] Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261050 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") pod \"013dc545-6073-4075-833b-e3e50bf89413\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") pod \"013dc545-6073-4075-833b-e3e50bf89413\" (UID: \"013dc545-6073-4075-833b-e3e50bf89413\") " Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "013dc545-6073-4075-833b-e3e50bf89413" (UID: "013dc545-6073-4075-833b-e3e50bf89413"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.261640 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013dc545-6073-4075-833b-e3e50bf89413-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.267133 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "013dc545-6073-4075-833b-e3e50bf89413" (UID: "013dc545-6073-4075-833b-e3e50bf89413"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.363151 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013dc545-6073-4075-833b-e3e50bf89413-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.783366 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerStarted","Data":"a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139"} Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.785142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerStarted","Data":"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"} Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.787033 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.787018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"013dc545-6073-4075-833b-e3e50bf89413","Type":"ContainerDied","Data":"3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb"} Jan 26 22:43:12 crc kubenswrapper[4793]: I0126 22:43:12.787456 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e24b766b7d3fbfde8acc08b3579c572ccd2d7d77eb6b2462e536cf8abfa5edb" Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.772391 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" path="/var/lib/kubelet/pods/2fbd096a-9989-4aa8-8c4d-e77ca47aee86/volumes" Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.795164 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerID="a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139" exitCode=0 Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.796142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139"} Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.800301 4793 generic.go:334] "Generic (PLEG): container finished" podID="ca21a12c-d4c3-414b-816c-858756e16147" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857" exitCode=0 Jan 26 22:43:13 crc kubenswrapper[4793]: I0126 22:43:13.800437 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"} Jan 26 22:43:14 crc kubenswrapper[4793]: I0126 22:43:14.192383 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:14 crc kubenswrapper[4793]: I0126 22:43:14.192460 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:14 crc kubenswrapper[4793]: I0126 22:43:14.251272 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.211906 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214043 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214086 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214105 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214120 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-content" Jan 
26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214137 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214152 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="extract-content" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214225 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214242 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214265 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214282 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214305 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214322 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="extract-utilities" Jan 26 22:43:16 crc kubenswrapper[4793]: E0126 22:43:16.214348 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013dc545-6073-4075-833b-e3e50bf89413" containerName="pruner" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214364 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="013dc545-6073-4075-833b-e3e50bf89413" containerName="pruner" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214572 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbd096a-9989-4aa8-8c4d-e77ca47aee86" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214600 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1745bb39-265a-4318-8ede-bd919dc967d0" containerName="registry-server" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.214627 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="013dc545-6073-4075-833b-e3e50bf89413" containerName="pruner" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.215526 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.218869 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.221547 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.237465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.320320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.320392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.320642 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.421890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.421960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.422011 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.422074 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.422136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.465145 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"installer-9-crc\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.552994 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.820066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerStarted","Data":"bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c"} Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.821658 4793 generic.go:334] "Generic (PLEG): container finished" podID="99c0251d-b287-4c00-a392-c43b8164e73d" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" exitCode=0 Jan 26 22:43:16 crc kubenswrapper[4793]: I0126 22:43:16.821688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88"} Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.223357 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.224014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.781073 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.831242 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerStarted","Data":"86d64a9898e0bc215d83fa5f31fbdb9e7c7bf15338d21b31a5a1c977a3cf778e"} Jan 26 22:43:17 crc kubenswrapper[4793]: I0126 22:43:17.854514 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-78ml2" podStartSLOduration=4.592942649 podStartE2EDuration="54.854495608s" podCreationTimestamp="2026-01-26 22:42:23 +0000 UTC" firstStartedPulling="2026-01-26 22:42:25.699545646 +0000 UTC m=+160.688317198" lastFinishedPulling="2026-01-26 22:43:15.961098605 +0000 UTC m=+210.949870157" observedRunningTime="2026-01-26 22:43:17.853497164 +0000 UTC m=+212.842268686" watchObservedRunningTime="2026-01-26 22:43:17.854495608 +0000 UTC m=+212.843267130" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.269536 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" probeResult="failure" output=< Jan 26 22:43:18 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 26 22:43:18 crc kubenswrapper[4793]: > Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.322418 4793 patch_prober.go:28] interesting 
pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.322483 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.322530 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.323075 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.323135 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944" gracePeriod=600 Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.841091 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerStarted","Data":"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.844936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerStarted","Data":"ae86aae5bad8ac03faa82ea1252a93b8958791a84a7892d11dbd7cc6c74b423c"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.847793 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944" exitCode=0 Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.847864 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.850345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerStarted","Data":"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"} Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.875573 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nksn9" podStartSLOduration=3.254315015 podStartE2EDuration="54.875550052s" podCreationTimestamp="2026-01-26 22:42:24 +0000 UTC" 
firstStartedPulling="2026-01-26 22:42:25.702671611 +0000 UTC m=+160.691443123" lastFinishedPulling="2026-01-26 22:43:17.323906648 +0000 UTC m=+212.312678160" observedRunningTime="2026-01-26 22:43:18.873031856 +0000 UTC m=+213.861803388" watchObservedRunningTime="2026-01-26 22:43:18.875550052 +0000 UTC m=+213.864321574" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.901604 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.901576038 podStartE2EDuration="2.901576038s" podCreationTimestamp="2026-01-26 22:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:18.897404816 +0000 UTC m=+213.886176408" watchObservedRunningTime="2026-01-26 22:43:18.901576038 +0000 UTC m=+213.890347590" Jan 26 22:43:18 crc kubenswrapper[4793]: I0126 22:43:18.921959 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xlnl6" podStartSLOduration=2.340758543 podStartE2EDuration="52.921937751s" podCreationTimestamp="2026-01-26 22:42:26 +0000 UTC" firstStartedPulling="2026-01-26 22:42:28.085265572 +0000 UTC m=+163.074037074" lastFinishedPulling="2026-01-26 22:43:18.66644473 +0000 UTC m=+213.655216282" observedRunningTime="2026-01-26 22:43:18.921521897 +0000 UTC m=+213.910293419" watchObservedRunningTime="2026-01-26 22:43:18.921937751 +0000 UTC m=+213.910709273" Jan 26 22:43:19 crc kubenswrapper[4793]: I0126 22:43:19.860825 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf"} Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.312744 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.314280 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" containerID="cri-o://c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f" gracePeriod=30 Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.330049 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:22 crc kubenswrapper[4793]: I0126 22:43:22.330562 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" containerID="cri-o://53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853" gracePeriod=30 Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.916823 4793 generic.go:334] "Generic (PLEG): container finished" podID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerID="c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f" exitCode=0 Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.916912 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" 
event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerDied","Data":"c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f"} Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.920169 4793 generic.go:334] "Generic (PLEG): container finished" podID="12030821-2df1-4742-8879-2885ea6315da" containerID="53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853" exitCode=0 Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.920258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerDied","Data":"53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853"} Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.995453 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:23 crc kubenswrapper[4793]: I0126 22:43:23.995532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.020457 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.062261 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:24 crc kubenswrapper[4793]: E0126 22:43:24.062509 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.062526 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.062668 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="12030821-2df1-4742-8879-2885ea6315da" containerName="route-controller-manager" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.063436 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.069982 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.097003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.141790 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176565 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176672 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.176763 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") pod \"12030821-2df1-4742-8879-2885ea6315da\" (UID: \"12030821-2df1-4742-8879-2885ea6315da\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.178151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config" (OuterVolumeSpecName: "config") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.178169 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca" (OuterVolumeSpecName: "client-ca") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.183793 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct" (OuterVolumeSpecName: "kube-api-access-sdwct") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "kube-api-access-sdwct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.183963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12030821-2df1-4742-8879-2885ea6315da" (UID: "12030821-2df1-4742-8879-2885ea6315da"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.232487 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278386 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278540 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") pod \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\" (UID: \"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a\") " Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278874 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12030821-2df1-4742-8879-2885ea6315da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278885 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278895 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwct\" (UniqueName: \"kubernetes.io/projected/12030821-2df1-4742-8879-2885ea6315da-kube-api-access-sdwct\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.278902 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12030821-2df1-4742-8879-2885ea6315da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.279628 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.279830 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.280269 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config" (OuterVolumeSpecName: "config") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.281506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.282124 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f" (OuterVolumeSpecName: "kube-api-access-ptx2f") pod "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" (UID: "b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a"). InnerVolumeSpecName "kube-api-access-ptx2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380554 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380611 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380713 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptx2f\" (UniqueName: \"kubernetes.io/projected/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-kube-api-access-ptx2f\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380758 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380777 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380796 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.380813 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.381956 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 
22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.382130 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.386890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.388665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.388722 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.399835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"route-controller-manager-84b9f7f9fd-f8g4w\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.429092 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.443708 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.694347 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:24 crc kubenswrapper[4793]: W0126 22:43:24.706033 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc577754_9b71_4135_b5ca_3b2255b30af0.slice/crio-653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1 WatchSource:0}: Error finding container 653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1: Status 404 returned error can't find the container with id 653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1 Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.928427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerStarted","Data":"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30"} Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.928485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerStarted","Data":"653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1"} Jan 26 22:43:24 crc 
kubenswrapper[4793]: I0126 22:43:24.928718 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.930585 4793 patch_prober.go:28] interesting pod/route-controller-manager-84b9f7f9fd-f8g4w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.930701 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.932569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" event={"ID":"b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a","Type":"ContainerDied","Data":"45b69ac25c01759287e2d0a5eea45d0fa774df75dbd92ec0b24b338a87d0949d"} Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.932609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.932701 4793 scope.go:117] "RemoveContainer" containerID="c8090fdd1369c91b7dd744e02ec61db84949669bb837332b7d8599799222e44f" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.935016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" event={"ID":"12030821-2df1-4742-8879-2885ea6315da","Type":"ContainerDied","Data":"59cadafa8d535574a7b73e67918ff7235119f1b552a808030d700e00a0cd95a9"} Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.935260 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.967144 4793 scope.go:117] "RemoveContainer" containerID="53dcef64e09d644b0cdaca29294dae740c9b20757947cb7581ca4de5d32dd853" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.987250 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" podStartSLOduration=2.987225833 podStartE2EDuration="2.987225833s" podCreationTimestamp="2026-01-26 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:24.957870383 +0000 UTC m=+219.946641895" watchObservedRunningTime="2026-01-26 22:43:24.987225833 +0000 UTC m=+219.975997355" Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.992254 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:24 crc kubenswrapper[4793]: I0126 22:43:24.994867 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695d9c55d-2wvxs"] Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.005936 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.010122 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.010737 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84c7c9c945-fvjfr"] Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.024250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.131630 4793 patch_prober.go:28] interesting pod/controller-manager-84c7c9c945-fvjfr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: i/o timeout" start-of-body= Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.131694 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84c7c9c945-fvjfr" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: i/o timeout" Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.599513 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.769014 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12030821-2df1-4742-8879-2885ea6315da" path="/var/lib/kubelet/pods/12030821-2df1-4742-8879-2885ea6315da/volumes" Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.769678 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" path="/var/lib/kubelet/pods/b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a/volumes" Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.949424 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:25 crc kubenswrapper[4793]: I0126 22:43:25.968461 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.483765 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:26 crc kubenswrapper[4793]: E0126 22:43:26.484594 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.484617 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.484794 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a11c9b-3ec7-4f1f-bff1-4a07e751bc9a" containerName="controller-manager" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.485789 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490066 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490806 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490891 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490892 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.490897 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.491516 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.502821 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.504853 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.537828 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.537944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: 
\"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.538026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.538118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.538176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.607398 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.608520 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639324 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.639445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 
22:43:26.639484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.641373 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.641524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.641583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.657967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.661823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"controller-manager-778cdbc9d6-mtl75\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.683157 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.846006 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:26 crc kubenswrapper[4793]: I0126 22:43:26.954244 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nksn9" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" containerID="cri-o://78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" gracePeriod=2 Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.003849 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.160402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.287864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.331454 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.341246 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.350273 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") pod \"ca21a12c-d4c3-414b-816c-858756e16147\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.412869 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca21a12c-d4c3-414b-816c-858756e16147" (UID: "ca21a12c-d4c3-414b-816c-858756e16147"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.451673 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") pod \"ca21a12c-d4c3-414b-816c-858756e16147\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.451734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") pod \"ca21a12c-d4c3-414b-816c-858756e16147\" (UID: \"ca21a12c-d4c3-414b-816c-858756e16147\") " Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.451999 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.452578 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities" (OuterVolumeSpecName: "utilities") pod "ca21a12c-d4c3-414b-816c-858756e16147" (UID: "ca21a12c-d4c3-414b-816c-858756e16147"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.457208 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6" (OuterVolumeSpecName: "kube-api-access-v8sb6") pod "ca21a12c-d4c3-414b-816c-858756e16147" (UID: "ca21a12c-d4c3-414b-816c-858756e16147"). InnerVolumeSpecName "kube-api-access-v8sb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.553841 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8sb6\" (UniqueName: \"kubernetes.io/projected/ca21a12c-d4c3-414b-816c-858756e16147-kube-api-access-v8sb6\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.554406 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca21a12c-d4c3-414b-816c-858756e16147-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.961449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerStarted","Data":"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba"} Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.961529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerStarted","Data":"0b072c7b5c8d4bf3ec078448030b74f78bd05e8468581b61acdb9d6e7206815a"} Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.961710 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963649 4793 generic.go:334] "Generic (PLEG): container finished" podID="ca21a12c-d4c3-414b-816c-858756e16147" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" exitCode=0 Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963737 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nksn9" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963726 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"} Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963803 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nksn9" event={"ID":"ca21a12c-d4c3-414b-816c-858756e16147","Type":"ContainerDied","Data":"23aef63c8314502249bed5cdebc0d033c6303855f2021cfbe103e4220fb0c8cd"} Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.963839 4793 scope.go:117] "RemoveContainer" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.971650 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.984382 4793 scope.go:117] "RemoveContainer" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857" Jan 26 22:43:27 crc kubenswrapper[4793]: I0126 22:43:27.987018 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" podStartSLOduration=5.986994724 podStartE2EDuration="5.986994724s" podCreationTimestamp="2026-01-26 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:27.985849515 +0000 UTC m=+222.974621027" watchObservedRunningTime="2026-01-26 22:43:27.986994724 +0000 UTC m=+222.975766246" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.005982 4793 scope.go:117] "RemoveContainer" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.035484 4793 scope.go:117] "RemoveContainer" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" Jan 26 22:43:28 crc kubenswrapper[4793]: E0126 22:43:28.036036 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57\": container with ID starting with 78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57 not found: ID does not exist" containerID="78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036069 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57"} err="failed to get container status \"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57\": rpc error: code = NotFound desc = could not find container \"78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57\": container with ID starting with 78557cb80aaf999a1efe376d0c55baedb6019afcf9c5410256cadf24cf42ed57 not found: ID does not exist" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036093 4793 scope.go:117] "RemoveContainer" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857" Jan 26 22:43:28 crc kubenswrapper[4793]: E0126 22:43:28.036592 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857\": container with ID starting with a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857 not found: ID does not exist" containerID="a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036683 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857"} err="failed to get container status \"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857\": rpc error: code = NotFound desc = could not find container \"a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857\": container with ID starting with a3c4ae7623e287fd7f6ba21100735e12f703b885b3499d723b76e0f9839e7857 not found: ID does not exist" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.036723 4793 scope.go:117] "RemoveContainer" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2" Jan 26 22:43:28 crc kubenswrapper[4793]: E0126 22:43:28.037334 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2\": container with ID starting with 3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2 not found: ID does not exist" containerID="3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.037385 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2"} err="failed to get container status \"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2\": rpc error: code = NotFound desc = could not find container \"3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2\": container with ID starting with 3a6a8c3db06eb192af175b0b831948bebe1a38d7235621521423f00b6264fcc2 not found: ID does not exist" Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.063277 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:43:28 crc kubenswrapper[4793]: I0126 22:43:28.070810 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nksn9"] Jan 26 22:43:29 crc kubenswrapper[4793]: I0126 22:43:29.767885 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca21a12c-d4c3-414b-816c-858756e16147" path="/var/lib/kubelet/pods/ca21a12c-d4c3-414b-816c-858756e16147/volumes" Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.367725 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.368558 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xlnl6" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" containerID="cri-o://6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" gracePeriod=2 Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.847109 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987087 4793 generic.go:334] "Generic (PLEG): container finished" podID="99c0251d-b287-4c00-a392-c43b8164e73d" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" exitCode=0 Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987141 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"} Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlnl6" event={"ID":"99c0251d-b287-4c00-a392-c43b8164e73d","Type":"ContainerDied","Data":"18efb665e88f94cc0c7aef3bce9135f6a8b698b34f0bd84750c1350537a84b4c"} Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987216 4793 scope.go:117] "RemoveContainer" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" Jan 26 22:43:30 crc kubenswrapper[4793]: I0126 22:43:30.987224 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlnl6" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.008636 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") pod \"99c0251d-b287-4c00-a392-c43b8164e73d\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.008730 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") pod \"99c0251d-b287-4c00-a392-c43b8164e73d\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.008801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") pod \"99c0251d-b287-4c00-a392-c43b8164e73d\" (UID: \"99c0251d-b287-4c00-a392-c43b8164e73d\") " Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.010843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities" (OuterVolumeSpecName: "utilities") pod "99c0251d-b287-4c00-a392-c43b8164e73d" (UID: "99c0251d-b287-4c00-a392-c43b8164e73d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.015285 4793 scope.go:117] "RemoveContainer" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.018412 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr" (OuterVolumeSpecName: "kube-api-access-rpdhr") pod "99c0251d-b287-4c00-a392-c43b8164e73d" (UID: "99c0251d-b287-4c00-a392-c43b8164e73d"). InnerVolumeSpecName "kube-api-access-rpdhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.042280 4793 scope.go:117] "RemoveContainer" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.051752 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99c0251d-b287-4c00-a392-c43b8164e73d" (UID: "99c0251d-b287-4c00-a392-c43b8164e73d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.062692 4793 scope.go:117] "RemoveContainer" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" Jan 26 22:43:31 crc kubenswrapper[4793]: E0126 22:43:31.063475 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98\": container with ID starting with 6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98 not found: ID does not exist" containerID="6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.063536 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98"} err="failed to get container status \"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98\": rpc error: code = NotFound desc = could not find container \"6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98\": container with ID starting with 6ed2bb489aeda6f679e1a6f7280e2c90418811f20bbd0b69181e1834b1c07b98 not found: ID does not exist" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.063571 4793 scope.go:117] "RemoveContainer" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" Jan 26 22:43:31 crc kubenswrapper[4793]: E0126 22:43:31.064118 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88\": container with ID starting with dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88 not found: ID does not exist" containerID="dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.064269 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88"} err="failed to get container status \"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88\": rpc error: code = NotFound desc = could not find container \"dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88\": container with ID starting with dfa82ba8e30e94ad7515e018e23f0c12b0c126cd97491d1293d7df255ae07c88 not found: ID does not exist" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.064387 4793 scope.go:117] "RemoveContainer" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" Jan 26 22:43:31 crc kubenswrapper[4793]: E0126 22:43:31.064822 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437\": container with ID starting with 202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437 not found: ID does not exist" containerID="202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.064940 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437"} err="failed to get container status \"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437\": rpc error: code = NotFound desc = could not find container \"202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437\": container with ID starting with 202853395a904340f93137f4b0c85c5302f20e05e85d4ff51c5667af14431437 not found: ID does not exist" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.110750 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.110870 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpdhr\" (UniqueName: \"kubernetes.io/projected/99c0251d-b287-4c00-a392-c43b8164e73d-kube-api-access-rpdhr\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.110992 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c0251d-b287-4c00-a392-c43b8164e73d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.334842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.340584 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlnl6"] Jan 26 22:43:31 crc kubenswrapper[4793]: I0126 22:43:31.769992 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" path="/var/lib/kubelet/pods/99c0251d-b287-4c00-a392-c43b8164e73d/volumes" Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.346357 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.347433 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" containerID="cri-o://14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" gracePeriod=30 Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.462559 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.463157 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" containerID="cri-o://b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" gracePeriod=30 Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.928317 4793 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:42 crc kubenswrapper[4793]: I0126 22:43:42.931998 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082322 4793 generic.go:334] "Generic (PLEG): container finished" podID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" exitCode=0 Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082387 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerDied","Data":"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082416 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082442 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w" event={"ID":"fc577754-9b71-4135-b5ca-3b2255b30af0","Type":"ContainerDied","Data":"653bb91fdc6e280092de13df8db2c0a054813eb903268f7ada80f2ba39af69c1"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082467 4793 scope.go:117] "RemoveContainer" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082948 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.082997 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.083045 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.083074 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.083109 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 
22:43:43.084445 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084152 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") pod \"fc577754-9b71-4135-b5ca-3b2255b30af0\" (UID: \"fc577754-9b71-4135-b5ca-3b2255b30af0\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") pod \"1919941b-c99f-483f-957d-eac3d9c10d76\" (UID: \"1919941b-c99f-483f-957d-eac3d9c10d76\") " Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084859 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.084946 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca" (OuterVolumeSpecName: "client-ca") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.085002 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config" (OuterVolumeSpecName: "config") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.085464 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config" (OuterVolumeSpecName: "config") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087513 4793 generic.go:334] "Generic (PLEG): container finished" podID="1919941b-c99f-483f-957d-eac3d9c10d76" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" exitCode=0 Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerDied","Data":"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" event={"ID":"1919941b-c99f-483f-957d-eac3d9c10d76","Type":"ContainerDied","Data":"0b072c7b5c8d4bf3ec078448030b74f78bd05e8468581b61acdb9d6e7206815a"} Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.087614 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778cdbc9d6-mtl75" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.089723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.089790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s" (OuterVolumeSpecName: "kube-api-access-tjd8s") pod "fc577754-9b71-4135-b5ca-3b2255b30af0" (UID: "fc577754-9b71-4135-b5ca-3b2255b30af0"). InnerVolumeSpecName "kube-api-access-tjd8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.089849 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795" (OuterVolumeSpecName: "kube-api-access-lp795") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "kube-api-access-lp795". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.092130 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1919941b-c99f-483f-957d-eac3d9c10d76" (UID: "1919941b-c99f-483f-957d-eac3d9c10d76"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.114936 4793 scope.go:117] "RemoveContainer" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.115770 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30\": container with ID starting with b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30 not found: ID does not exist" containerID="b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.116058 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30"} err="failed to get container status \"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30\": rpc error: code = NotFound desc = could not find container \"b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30\": container with ID starting with b35caa64e6d9db4b5ae9873695266e6ff86def52c563eda03ccc44468c7bea30 not found: ID does not exist" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.116106 4793 scope.go:117] "RemoveContainer" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.130539 4793 scope.go:117] "RemoveContainer" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.131165 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba\": container with ID starting with 14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba not found: ID does not exist" containerID="14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.131234 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba"} err="failed to get container status \"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba\": rpc error: code = NotFound desc = could not find container \"14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba\": container with ID starting with 14d59da3687b7822443e30a557679adf61c60fd5c303d7bad948e5bc15aefaba not found: ID does not exist" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185342 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc577754-9b71-4135-b5ca-3b2255b30af0-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185382 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc577754-9b71-4135-b5ca-3b2255b30af0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185395 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185407 4793 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjd8s\" (UniqueName: \"kubernetes.io/projected/fc577754-9b71-4135-b5ca-3b2255b30af0-kube-api-access-tjd8s\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185422 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185434 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp795\" (UniqueName: \"kubernetes.io/projected/1919941b-c99f-483f-957d-eac3d9c10d76-kube-api-access-lp795\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185445 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919941b-c99f-483f-957d-eac3d9c10d76-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.185456 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1919941b-c99f-483f-957d-eac3d9c10d76-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.412526 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.416077 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84b9f7f9fd-f8g4w"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.426490 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.430449 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-778cdbc9d6-mtl75"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.487997 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn"] Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488352 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488375 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488397 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488406 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488418 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488426 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-content" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488439 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488446 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488457 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488467 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488482 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488491 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488502 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488510 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: E0126 22:43:43.488528 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488535 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="extract-utilities" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488650 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" containerName="controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488667 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" containerName="route-controller-manager" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488676 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca21a12c-d4c3-414b-816c-858756e16147" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.488688 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c0251d-b287-4c00-a392-c43b8164e73d" containerName="registry-server" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.489222 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.497559 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.497930 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498205 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498427 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498791 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.498793 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.502843 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn"] Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.504623 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.691856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-serving-cert\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.691968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-proxy-ca-bundles\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.692004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-config\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.692037 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-client-ca\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.692170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfsxh\" (UniqueName: 
\"kubernetes.io/projected/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-kube-api-access-jfsxh\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.767457 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1919941b-c99f-483f-957d-eac3d9c10d76" path="/var/lib/kubelet/pods/1919941b-c99f-483f-957d-eac3d9c10d76/volumes" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.768022 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc577754-9b71-4135-b5ca-3b2255b30af0" path="/var/lib/kubelet/pods/fc577754-9b71-4135-b5ca-3b2255b30af0/volumes" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793352 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-serving-cert\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-proxy-ca-bundles\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793867 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-config\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793906 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-client-ca\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.793958 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfsxh\" (UniqueName: \"kubernetes.io/projected/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-kube-api-access-jfsxh\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.795309 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-client-ca\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.795527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-proxy-ca-bundles\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: 
\"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.796426 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-config\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.797914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-serving-cert\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:43 crc kubenswrapper[4793]: I0126 22:43:43.810866 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfsxh\" (UniqueName: \"kubernetes.io/projected/fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978-kube-api-access-jfsxh\") pod \"controller-manager-5968bd7cd7-6gqxn\" (UID: \"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978\") " pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.109404 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.489799 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m"] Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.491121 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.494032 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.494391 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.494919 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.495154 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.495370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.504576 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m"] Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.506528 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.564142 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn"] Jan 26 22:43:44 crc kubenswrapper[4793]: W0126 22:43:44.571035 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe1ed27d_d22f_4a4c_8612_c6ac5dc8a978.slice/crio-220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c WatchSource:0}: Error finding container 220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c: Status 404 returned error can't find the container with id 220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653567 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8b4977-7970-4687-a34b-d0c385d5672e-serving-cert\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653655 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-config\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653692 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmdc\" (UniqueName: \"kubernetes.io/projected/7d8b4977-7970-4687-a34b-d0c385d5672e-kube-api-access-ghmdc\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 
26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.653722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-client-ca\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.754789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-config\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.755583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmdc\" (UniqueName: \"kubernetes.io/projected/7d8b4977-7970-4687-a34b-d0c385d5672e-kube-api-access-ghmdc\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.755709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-client-ca\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.756572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-config\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.756747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d8b4977-7970-4687-a34b-d0c385d5672e-client-ca\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.757013 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8b4977-7970-4687-a34b-d0c385d5672e-serving-cert\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.764509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d8b4977-7970-4687-a34b-d0c385d5672e-serving-cert\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.775768 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ghmdc\" (UniqueName: \"kubernetes.io/projected/7d8b4977-7970-4687-a34b-d0c385d5672e-kube-api-access-ghmdc\") pod \"route-controller-manager-787f475857-mdq9m\" (UID: \"7d8b4977-7970-4687-a34b-d0c385d5672e\") " pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:44 crc kubenswrapper[4793]: I0126 22:43:44.817539 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.112337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" event={"ID":"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978","Type":"ContainerStarted","Data":"b4ddfc357b8d576565172d4c070250813cde188dc3e0497fc2a13cea2d107687"} Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.112379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" event={"ID":"fe1ed27d-d22f-4a4c-8612-c6ac5dc8a978","Type":"ContainerStarted","Data":"220b50f40f02513766d2b5089cf0c9c6c74666c7caf1f42eb6fa61dfa0005a5c"} Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.113535 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.118750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.162179 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5968bd7cd7-6gqxn" podStartSLOduration=3.162155868 podStartE2EDuration="3.162155868s" podCreationTimestamp="2026-01-26 22:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:45.142395075 +0000 UTC m=+240.131166587" watchObservedRunningTime="2026-01-26 22:43:45.162155868 +0000 UTC m=+240.150927380" Jan 26 22:43:45 crc kubenswrapper[4793]: I0126 22:43:45.246909 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m"] Jan 26 22:43:45 crc kubenswrapper[4793]: W0126 22:43:45.253948 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d8b4977_7970_4687_a34b_d0c385d5672e.slice/crio-f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4 WatchSource:0}: Error finding container f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4: Status 404 returned error can't find the container with id f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4 Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.119710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" event={"ID":"7d8b4977-7970-4687-a34b-d0c385d5672e","Type":"ContainerStarted","Data":"d5d5c3baf2922669eca8436048c05ad30016076dcea10b963d44d4503d39cfd5"} Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.120212 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" 
event={"ID":"7d8b4977-7970-4687-a34b-d0c385d5672e","Type":"ContainerStarted","Data":"f8a9ca87d4052d41cd397e7bd38bd47e635945c3206204f0342e39e4b8cc72c4"} Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.120254 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.126564 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" Jan 26 22:43:46 crc kubenswrapper[4793]: I0126 22:43:46.160583 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-787f475857-mdq9m" podStartSLOduration=4.160564981 podStartE2EDuration="4.160564981s" podCreationTimestamp="2026-01-26 22:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:43:46.1400046 +0000 UTC m=+241.128776112" watchObservedRunningTime="2026-01-26 22:43:46.160564981 +0000 UTC m=+241.149336493" Jan 26 22:43:50 crc kubenswrapper[4793]: I0126 22:43:50.630219 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" containerID="cri-o://fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe" gracePeriod=15 Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.159964 4793 generic.go:334] "Generic (PLEG): container finished" podID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerID="fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe" exitCode=0 Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.160065 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerDied","Data":"fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe"} Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.347107 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378518 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378832 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.378980 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.379901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.379927 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380038 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.379014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380339 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380373 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380448 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") pod \"e17125aa-eb99-4bad-a99d-44b86be4f09d\" (UID: \"e17125aa-eb99-4bad-a99d-44b86be4f09d\") " Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380795 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380824 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380841 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.380853 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.381081 
4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.386555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.387305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.390488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p" (OuterVolumeSpecName: "kube-api-access-kjx6p") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "kube-api-access-kjx6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.397379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.397989 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.398268 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.398343 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.398583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.400539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e17125aa-eb99-4bad-a99d-44b86be4f09d" (UID: "e17125aa-eb99-4bad-a99d-44b86be4f09d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.481969 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482005 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482014 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482023 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482035 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482046 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjx6p\" (UniqueName: \"kubernetes.io/projected/e17125aa-eb99-4bad-a99d-44b86be4f09d-kube-api-access-kjx6p\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482057 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482066 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482076 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:52 crc kubenswrapper[4793]: I0126 22:43:52.482084 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e17125aa-eb99-4bad-a99d-44b86be4f09d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.168777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" event={"ID":"e17125aa-eb99-4bad-a99d-44b86be4f09d","Type":"ContainerDied","Data":"52b42dc9fc3fc776aa8343ac648fa9a1880a7934eae4c7a756dfbfced2a3bb5c"} Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.168851 4793 scope.go:117] "RemoveContainer" containerID="fda2f5b52038cfc33503a309170e245bf286f4bab1007f8d89f6a02e8239cdfe" Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.168868 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lvnpc" Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.213708 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.219977 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lvnpc"] Jan 26 22:43:53 crc kubenswrapper[4793]: I0126 22:43:53.772736 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" path="/var/lib/kubelet/pods/e17125aa-eb99-4bad-a99d-44b86be4f09d/volumes" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.010165 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.010961 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.010987 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.011183 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17125aa-eb99-4bad-a99d-44b86be4f09d" containerName="oauth-openshift" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.011637 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.011772 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012081 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012296 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012403 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012478 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012017 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" gracePeriod=15 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.012775 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013118 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013152 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013170 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013185 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013232 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013246 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013265 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013277 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013340 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013354 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013378 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013390 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.013420 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013432 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013605 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013627 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013641 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013658 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013677 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.013694 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.019217 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027578 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.027930 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.062775 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.132985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133272 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133381 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133418 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133654 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133656 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.133707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.190583 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.192086 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.192877 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" exitCode=2 Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.236352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: I0126 22:43:56.353399 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.375724 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e694222bd365f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,LastTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:43:56 crc kubenswrapper[4793]: E0126 22:43:56.769302 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e694222bd365f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,LastTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.200806 4793 generic.go:334] "Generic (PLEG): container finished" podID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerID="ae86aae5bad8ac03faa82ea1252a93b8958791a84a7892d11dbd7cc6c74b423c" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.200961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerDied","Data":"ae86aae5bad8ac03faa82ea1252a93b8958791a84a7892d11dbd7cc6c74b423c"} Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.202516 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 
22:43:57.203109 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.203466 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806"} Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.203507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"10aca80144b51045f9aedf38a88ada7facd81ee3b5ec9db74afc7ce3435d0128"} Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.204365 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.204867 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.205916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207382 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207904 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207923 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207930 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" exitCode=0 Jan 26 22:43:57 crc kubenswrapper[4793]: I0126 22:43:57.207959 4793 scope.go:117] "RemoveContainer" containerID="2773eb839948f61e62e0c67dc058b04211be336f95ecb1850a366d6224695ca7" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.226050 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.639377 4793 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.641120 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.641628 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670380 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") pod \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670436 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") pod \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670524 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock" (OuterVolumeSpecName: "var-lock") pod "dbe90ef5-9518-4ec1-b9b6-479bf9732ded" (UID: "dbe90ef5-9518-4ec1-b9b6-479bf9732ded"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670683 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") pod \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\" (UID: \"dbe90ef5-9518-4ec1-b9b6-479bf9732ded\") " Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670710 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dbe90ef5-9518-4ec1-b9b6-479bf9732ded" (UID: "dbe90ef5-9518-4ec1-b9b6-479bf9732ded"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670901 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.670921 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.676393 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dbe90ef5-9518-4ec1-b9b6-479bf9732ded" (UID: "dbe90ef5-9518-4ec1-b9b6-479bf9732ded"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:43:58 crc kubenswrapper[4793]: I0126 22:43:58.772344 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe90ef5-9518-4ec1-b9b6-479bf9732ded-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.148362 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.149578 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.150585 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.151183 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.151840 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175818 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: 
"f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175855 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175913 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.175940 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176063 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176500 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176519 4793 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.176529 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.236923 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.238327 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" exitCode=0 Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.238437 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.239475 4793 scope.go:117] "RemoveContainer" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.241619 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dbe90ef5-9518-4ec1-b9b6-479bf9732ded","Type":"ContainerDied","Data":"86d64a9898e0bc215d83fa5f31fbdb9e7c7bf15338d21b31a5a1c977a3cf778e"} Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.241650 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d64a9898e0bc215d83fa5f31fbdb9e7c7bf15338d21b31a5a1c977a3cf778e" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.241942 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.255194 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.255520 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.255748 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.263108 4793 scope.go:117] "RemoveContainer" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.270009 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.270494 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.270794 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:43:59 crc 
kubenswrapper[4793]: I0126 22:43:59.280309 4793 scope.go:117] "RemoveContainer" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.298724 4793 scope.go:117] "RemoveContainer" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.316304 4793 scope.go:117] "RemoveContainer" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.341616 4793 scope.go:117] "RemoveContainer" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.370278 4793 scope.go:117] "RemoveContainer" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.370987 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\": container with ID starting with 9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d not found: ID does not exist" containerID="9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371076 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d"} err="failed to get container status \"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\": rpc error: code = NotFound desc = could not find container \"9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d\": container with ID starting with 9a67e03db06412f3bc5c6473bb08990d84cf43e4da94741750697d950784198d not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371147 4793 scope.go:117] "RemoveContainer" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.371572 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\": container with ID starting with a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b not found: ID does not exist" containerID="a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371607 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b"} err="failed to get container status \"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\": rpc error: code = NotFound desc = could not find container \"a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b\": container with ID starting with a314548829b8556a07a279becadcdd596dc295487eef6af43fff43182177921b not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.371658 4793 scope.go:117] "RemoveContainer" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.372208 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\": container with ID starting with 892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088 not found: ID does not exist" containerID="892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372281 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088"} err="failed to get container status \"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\": rpc error: code = NotFound desc = could not find container \"892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088\": container with ID starting with 892ee4957b57d500d30547dabb7a2dc8e42385d91ea6ec5ebb6feef0affbd088 not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372310 4793 scope.go:117] "RemoveContainer" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.372632 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\": container with ID starting with 9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2 not found: ID does not exist" containerID="9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372664 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2"} err="failed to get container status \"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\": rpc error: code = NotFound desc = could not find container \"9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2\": container with ID starting with 9a1d38db0e27110dec2903b61c638541961474c5563409db424be4c85f47ded2 not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.372683 4793 scope.go:117] "RemoveContainer" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" Jan 26 22:43:59 crc kubenswrapper[4793]: E0126 22:43:59.372926 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\": container with ID starting with 516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955 not found: ID does not exist" containerID="516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.373456 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955"} err="failed to get container status \"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\": rpc error: code = NotFound desc = could not find container \"516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955\": container with ID starting with 516554e557c1b3360f248395e2e22fcc04b0c8bbb2719562f235cec0976d4955 not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.373482 4793 scope.go:117] "RemoveContainer" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" Jan 26 22:43:59 crc 
kubenswrapper[4793]: E0126 22:43:59.373777 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\": container with ID starting with afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb not found: ID does not exist" containerID="afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.373844 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb"} err="failed to get container status \"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\": rpc error: code = NotFound desc = could not find container \"afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb\": container with ID starting with afa6e2c022692827e1f7f28a8256377a8e5a64bc7106ad35ca759893de306fdb not found: ID does not exist" Jan 26 22:43:59 crc kubenswrapper[4793]: I0126 22:43:59.771971 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 22:44:01 crc kubenswrapper[4793]: E0126 22:44:01.803662 4793 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" volumeName="registry-storage" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.634921 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.635729 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636016 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636290 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636533 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: I0126 22:44:05.636567 4793 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.636823 4793 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms" Jan 26 22:44:05 crc kubenswrapper[4793]: I0126 22:44:05.764696 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: I0126 22:44:05.764971 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:05 crc kubenswrapper[4793]: E0126 22:44:05.838432 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms" Jan 26 22:44:06 crc kubenswrapper[4793]: E0126 22:44:06.240002 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms" Jan 26 22:44:06 crc kubenswrapper[4793]: E0126 22:44:06.771077 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e694222bd365f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,LastTimestamp:2026-01-26 22:43:56.374767199 +0000 UTC m=+251.363538711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 22:44:07 crc kubenswrapper[4793]: E0126 22:44:07.041021 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Jan 26 22:44:08 crc kubenswrapper[4793]: E0126 22:44:08.641773 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: 
connection refused" interval="3.2s" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.760798 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.761818 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.762409 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.776115 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.776176 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:08 crc kubenswrapper[4793]: E0126 22:44:08.776793 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:08 crc kubenswrapper[4793]: I0126 22:44:08.777656 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306264 4793 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1a36f4e00ea348ef3f075af47a961cbd3441e01399ebdb64a044398c53971652" exitCode=0 Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1a36f4e00ea348ef3f075af47a961cbd3441e01399ebdb64a044398c53971652"} Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d91fec7ba1c2e67d7feda711d0511808d8d6e717319e7d8be72eb49e1a117b69"} Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306805 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.306826 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.307363 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: E0126 22:44:09.307392 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.307806 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311331 4793 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3" exitCode=1 Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3"} Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.311632 4793 scope.go:117] "RemoveContainer" containerID="e2dab0444a889fda28c58653de8c5526c27e9217d212a8b29276e9dadf939ce3" Jan 26 22:44:09 crc 
kubenswrapper[4793]: I0126 22:44:09.312230 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.312766 4793 status_manager.go:851] "Failed to get status for pod" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.313227 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 26 22:44:09 crc kubenswrapper[4793]: I0126 22:44:09.382615 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:44:10 crc kubenswrapper[4793]: I0126 22:44:10.323174 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 22:44:10 crc kubenswrapper[4793]: I0126 22:44:10.323404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b6437b716cf52361dcf2bc1f013b54e24cd4d66c149476d7391214aafbc20ea"} Jan 26 22:44:10 crc kubenswrapper[4793]: I0126 22:44:10.331082 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f33468285fa6093adfd00418dbeaf7bbe1e86a6ea7d2251e2026241bcefbfbdd"} Jan 26 22:44:11 crc kubenswrapper[4793]: I0126 22:44:11.338412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2d59cda4e5550c2b60a74f78a216e8147d7f962b0d8028a615f3e0fa1ed0c1d"} Jan 26 22:44:11 crc kubenswrapper[4793]: I0126 22:44:11.338810 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"567d460440a2c250abe366cd0efe6c27a9e00d8d58f03ac6631e7d15ab391bf2"} Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0e5fe672a6324af5f20b2de3228332cd1d7e464ebb01e0507f988da14bf9c9d9"} Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a02effc4db3b7794d055eb97eaff2c5c6dbcc6614d35445866f0579feea3fdc8"} Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347632 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:12 crc kubenswrapper[4793]: I0126 22:44:12.347661 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:13 crc kubenswrapper[4793]: I0126 22:44:13.777928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:13 crc kubenswrapper[4793]: I0126 22:44:13.778480 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:13 crc kubenswrapper[4793]: I0126 22:44:13.787743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:15 crc kubenswrapper[4793]: I0126 22:44:15.653901 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:44:15 crc kubenswrapper[4793]: I0126 22:44:15.658069 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:44:16 crc kubenswrapper[4793]: I0126 22:44:16.373364 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.358601 4793 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.383477 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.383692 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.383736 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.388053 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:17 crc kubenswrapper[4793]: I0126 22:44:17.390582 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aeea3e8c-33a1-40b4-9b77-f03da96a2f18" Jan 26 22:44:18 crc kubenswrapper[4793]: I0126 22:44:18.387402 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:18 crc kubenswrapper[4793]: I0126 22:44:18.388384 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:19 crc kubenswrapper[4793]: I0126 22:44:19.390004 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 22:44:25 crc kubenswrapper[4793]: I0126 22:44:25.783311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 22:44:25 crc kubenswrapper[4793]: I0126 22:44:25.794732 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aeea3e8c-33a1-40b4-9b77-f03da96a2f18" Jan 26 22:44:27 crc kubenswrapper[4793]: I0126 22:44:27.623925 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.009558 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.053308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.263994 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.335067 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.498814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.616986 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.713902 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.981639 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 22:44:28 crc kubenswrapper[4793]: I0126 22:44:28.983528 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.163103 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.377223 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.387882 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.391545 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.414304 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.419905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.588258 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.672176 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.730733 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.761629 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.782718 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.813720 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.978003 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 22:44:29 crc kubenswrapper[4793]: I0126 22:44:29.994443 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.027027 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.039734 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.047992 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.191406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.212378 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.220662 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.347536 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.658366 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.837162 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.916290 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 22:44:30 crc kubenswrapper[4793]: I0126 22:44:30.973800 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.126030 4793 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.163435 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.172168 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.186905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.194783 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.199081 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.217001 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.432985 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.760328 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.838263 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 22:44:31 crc kubenswrapper[4793]: I0126 22:44:31.847645 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.005083 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.047841 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.063689 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.084410 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.112760 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.374316 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.466416 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.543749 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.622365 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.655742 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.716635 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.743966 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.867025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 22:44:32 crc kubenswrapper[4793]: I0126 22:44:32.868639 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.003029 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.006116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.015999 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.017555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.078693 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.113007 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.166787 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.195478 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.207335 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.251591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.304813 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.496291 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.516407 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.545734 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.581819 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.615250 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.634751 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.689568 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.706925 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.730954 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.828738 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.868218 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 22:44:33 crc kubenswrapper[4793]: I0126 22:44:33.992089 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.024647 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.158824 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.243833 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.268756 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.309138 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.394711 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.464950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.486340 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 
22:44:34.573689 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.577974 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.625349 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.656864 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.729891 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.785260 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.799846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.873024 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.884091 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 22:44:34 crc kubenswrapper[4793]: I0126 22:44:34.953951 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.095909 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.194264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.334720 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.349272 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.351337 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.469201 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.620244 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.739943 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.809861 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 22:44:35 crc kubenswrapper[4793]: I0126 22:44:35.852732 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.007996 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.066364 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.096634 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.115934 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.140052 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.143672 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.478639 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.637690 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.873302 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.889700 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 22:44:36 crc kubenswrapper[4793]: I0126 22:44:36.896608 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.003459 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.062061 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.161388 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.169718 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.237047 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.245591 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.280118 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.312083 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.337922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.408478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.414223 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.447669 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.475231 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.477580 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.509859 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.532656 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.571183 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.658135 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.715818 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.740040 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.838332 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.843212 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.928478 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.934654 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.934622299 podStartE2EDuration="41.934622299s" podCreationTimestamp="2026-01-26 22:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:44:16.279184696 +0000 UTC m=+271.267956258" watchObservedRunningTime="2026-01-26 22:44:37.934622299 +0000 UTC m=+292.923393841" Jan 26 22:44:37 crc 
kubenswrapper[4793]: I0126 22:44:37.938003 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938080 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"] Jan 26 22:44:37 crc kubenswrapper[4793]: E0126 22:44:37.938534 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerName="installer" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938568 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerName="installer" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938764 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938799 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe90ef5-9518-4ec1-b9b6-479bf9732ded" containerName="installer" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.938808 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="65af0fd8-004c-4fdc-97f5-05d8bf6c8127" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.939639 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.946307 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.949343 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950261 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950693 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950723 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.950916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.951086 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.951763 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.952159 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.952597 4793 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.953544 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.954761 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.960776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.968316 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.972024 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-audit-policies\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995678 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjzq\" (UniqueName: \"kubernetes.io/projected/24e166d9-621b-468f-baca-a49eeafbbb16-kube-api-access-hqjzq\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995738 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.995966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996062 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc kubenswrapper[4793]: I0126 22:44:37.996100 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:37 crc 
kubenswrapper[4793]: I0126 22:44:37.996142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24e166d9-621b-468f-baca-a49eeafbbb16-audit-dir\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.009500 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.009444619 podStartE2EDuration="21.009444619s" podCreationTimestamp="2026-01-26 22:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:44:38.008285033 +0000 UTC m=+292.997056575" watchObservedRunningTime="2026-01-26 22:44:38.009444619 +0000 UTC m=+292.998216141" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.082553 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.090595 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097953 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.097998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24e166d9-621b-468f-baca-a49eeafbbb16-audit-dir\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc 
kubenswrapper[4793]: I0126 22:44:38.098058 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-audit-policies\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjzq\" (UniqueName: \"kubernetes.io/projected/24e166d9-621b-468f-baca-a49eeafbbb16-kube-api-access-hqjzq\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098212 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24e166d9-621b-468f-baca-a49eeafbbb16-audit-dir\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098326 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098359 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.098388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100049 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100107 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-audit-policies\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.100306 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.104991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.105015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106051 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106370 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.107421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.106976 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24e166d9-621b-468f-baca-a49eeafbbb16-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.122179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjzq\" (UniqueName: \"kubernetes.io/projected/24e166d9-621b-468f-baca-a49eeafbbb16-kube-api-access-hqjzq\") pod \"oauth-openshift-84fc7bd96f-65hxg\" (UID: \"24e166d9-621b-468f-baca-a49eeafbbb16\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.165952 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.179218 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.223030 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.230440 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.268963 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.321862 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.332456 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.366347 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.385930 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.430544 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.516589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.646131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.704532 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.725619 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84fc7bd96f-65hxg"] Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.733154 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.828244 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.846406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.872402 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 22:44:38 crc kubenswrapper[4793]: I0126 22:44:38.959558 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.011551 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.012450 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806" gracePeriod=5 Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.081854 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.130429 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.165362 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.261567 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.345693 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.457675 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.459382 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.488366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.529953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" event={"ID":"24e166d9-621b-468f-baca-a49eeafbbb16","Type":"ContainerStarted","Data":"4b3eabc9dec90ebf29859930dee2e754c8fcc01183a47c58bb3aa26ee201a046"} Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.530010 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" event={"ID":"24e166d9-621b-468f-baca-a49eeafbbb16","Type":"ContainerStarted","Data":"9d8837ae67c6cf3ce04ecaec7fd6c79106e74bf7d645ee7e2c491c88961b5320"} Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.530439 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.591871 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.644866 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.676552 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.692454 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.706681 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.731334 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.733095 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-84fc7bd96f-65hxg" podStartSLOduration=74.733061118 podStartE2EDuration="1m14.733061118s" podCreationTimestamp="2026-01-26 22:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:44:39.574445263 +0000 UTC m=+294.563216775" watchObservedRunningTime="2026-01-26 22:44:39.733061118 +0000 UTC m=+294.721832660" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.757491 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.777317 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.778824 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 22:44:39 crc kubenswrapper[4793]: I0126 22:44:39.875457 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.010912 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.050168 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.114500 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.213843 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.223762 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.261643 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.267842 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.381527 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.381922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.404947 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.437922 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.568677 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.634103 4793 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.659646 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.767490 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.794880 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.980902 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.987786 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.988230 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 22:44:40 crc kubenswrapper[4793]: I0126 22:44:40.994412 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.059969 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.271685 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.295043 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.379815 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.397925 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.413591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.440656 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.527987 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.630027 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.666737 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.690562 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 22:44:41 crc kubenswrapper[4793]: I0126 22:44:41.767920 4793 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.014643 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.033140 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.283170 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.367245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.428366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.428711 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.448328 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.607001 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.660327 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.756070 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 22:44:42 crc kubenswrapper[4793]: I0126 22:44:42.763025 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.037960 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.397336 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.419869 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.606482 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.680068 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 22:44:43 crc kubenswrapper[4793]: I0126 22:44:43.712343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.051259 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.439572 4793 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.576401 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.576491 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806" exitCode=137 Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.672672 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.672764 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.723846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.723991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724100 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724088 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724232 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724264 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724550 4793 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724572 4793 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724591 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.724609 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.737457 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.809123 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.825910 4793 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 22:44:44 crc kubenswrapper[4793]: I0126 22:44:44.908555 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.524783 4793 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.587915 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.588080 4793 scope.go:117] "RemoveContainer" containerID="c03cd50aac0b902e0ae894f30cc6ed356e9cc34124740561b34340f71813b806" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.588239 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.758130 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.774525 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.775077 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.793605 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.793662 4793 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="55b356c5-65be-4a87-9885-4e7c0be41ff8" Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.800835 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 22:44:45 crc kubenswrapper[4793]: I0126 22:44:45.800913 4793 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="55b356c5-65be-4a87-9885-4e7c0be41ff8" Jan 26 22:44:46 crc kubenswrapper[4793]: I0126 22:44:46.506146 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.193898 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24"] Jan 26 22:45:00 crc kubenswrapper[4793]: E0126 22:45:00.194963 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.195003 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.195311 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.196015 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.201630 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.202777 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24"] Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.204488 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.268249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.268386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.268451 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.369620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.369691 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.369793 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.371578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.380596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.402842 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"collect-profiles-29491125-4dp24\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:00 crc kubenswrapper[4793]: I0126 22:45:00.519452 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:00.998691 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24"] Jan 26 22:45:01 crc kubenswrapper[4793]: W0126 22:45:01.014057 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17076622_6d81_4499_9a65_dacfb8b78e44.slice/crio-b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b WatchSource:0}: Error finding container b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b: Status 404 returned error can't find the container with id b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:01.722261 4793 generic.go:334] "Generic (PLEG): container finished" podID="17076622-6d81-4499-9a65-dacfb8b78e44" containerID="c89a43e25baddf68cc1b158d6f9af509bbe62ff02e3f164e8b8fc64bef65b4f7" exitCode=0 Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:01.722369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" event={"ID":"17076622-6d81-4499-9a65-dacfb8b78e44","Type":"ContainerDied","Data":"c89a43e25baddf68cc1b158d6f9af509bbe62ff02e3f164e8b8fc64bef65b4f7"} Jan 26 22:45:01 crc kubenswrapper[4793]: I0126 22:45:01.722690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" event={"ID":"17076622-6d81-4499-9a65-dacfb8b78e44","Type":"ContainerStarted","Data":"b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b"} Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.069114 4793 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.211723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") pod \"17076622-6d81-4499-9a65-dacfb8b78e44\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.211893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") pod \"17076622-6d81-4499-9a65-dacfb8b78e44\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.212010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") pod \"17076622-6d81-4499-9a65-dacfb8b78e44\" (UID: \"17076622-6d81-4499-9a65-dacfb8b78e44\") " Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.212813 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume" (OuterVolumeSpecName: "config-volume") pod "17076622-6d81-4499-9a65-dacfb8b78e44" (UID: "17076622-6d81-4499-9a65-dacfb8b78e44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.222256 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17076622-6d81-4499-9a65-dacfb8b78e44" (UID: "17076622-6d81-4499-9a65-dacfb8b78e44"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.222318 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r" (OuterVolumeSpecName: "kube-api-access-5mv5r") pod "17076622-6d81-4499-9a65-dacfb8b78e44" (UID: "17076622-6d81-4499-9a65-dacfb8b78e44"). InnerVolumeSpecName "kube-api-access-5mv5r". 
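PluginName "kubernetes.io/projected", VolumeGidValue ""

The entries from 22:45:00 to 22:45:03 are one complete run of the OLM collect-profiles CronJob: the pod is ADDed, its configmap, secret, and projected service-account-token volumes are mounted, the container runs and finishes with exitCode=0 within a couple of seconds, and the same volumes are unmounted. The numeric suffix in the Job name, 29491125, is the scheduled time expressed in minutes since the Unix epoch (the convention the CronJob controller uses when naming Jobs): 29491125 * 60 = 1769467500 seconds, i.e. 2026-01-26 22:45:00 UTC, which matches the log timestamps. A small sketch of the conversion:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // "collect-profiles-29491125": the suffix is minutes since the epoch
        const scheduledMinutes = 29491125
        fmt.Println(time.Unix(scheduledMinutes*60, 0).UTC())
        // 2026-01-26 22:45:00 +0000 UTC
    }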
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.313665 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17076622-6d81-4499-9a65-dacfb8b78e44-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.313718 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mv5r\" (UniqueName: \"kubernetes.io/projected/17076622-6d81-4499-9a65-dacfb8b78e44-kube-api-access-5mv5r\") on node \"crc\" DevicePath \"\"" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.313729 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17076622-6d81-4499-9a65-dacfb8b78e44-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.744599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" event={"ID":"17076622-6d81-4499-9a65-dacfb8b78e44","Type":"ContainerDied","Data":"b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b"} Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.744671 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a5e10b04a7f555513a29c82207431eb7780598af17084155cedf9ae4d32c0b" Jan 26 22:45:03 crc kubenswrapper[4793]: I0126 22:45:03.744802 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491125-4dp24" Jan 26 22:45:48 crc kubenswrapper[4793]: I0126 22:45:48.323246 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:45:48 crc kubenswrapper[4793]: I0126 22:45:48.324044 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.253831 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gckjm"] Jan 26 22:46:07 crc kubenswrapper[4793]: E0126 22:46:07.256764 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17076622-6d81-4499-9a65-dacfb8b78e44" containerName="collect-profiles" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.256888 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="17076622-6d81-4499-9a65-dacfb8b78e44" containerName="collect-profiles" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.257111 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="17076622-6d81-4499-9a65-dacfb8b78e44" containerName="collect-profiles" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.257725 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.291670 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gckjm"] Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.420849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt9j\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-kube-api-access-wgt9j\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.420941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-certificates\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421406 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-trusted-ca\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-tls\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-bound-sa-token\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.421653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.444630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.522878 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.523377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt9j\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-kube-api-access-wgt9j\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.523607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-certificates\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.523871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-trusted-ca\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524385 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-tls\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524543 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.524712 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-bound-sa-token\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.525262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-certificates\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.526347 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-trusted-ca\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.531455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-registry-tls\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.532519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.551159 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-bound-sa-token\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.551250 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt9j\" (UniqueName: \"kubernetes.io/projected/f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738-kube-api-access-wgt9j\") pod \"image-registry-66df7c8f76-gckjm\" (UID: \"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738\") " pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.579275 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:07 crc kubenswrapper[4793]: I0126 22:46:07.871861 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gckjm"] Jan 26 22:46:08 crc kubenswrapper[4793]: I0126 22:46:08.179542 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" event={"ID":"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738","Type":"ContainerStarted","Data":"da673fbfbc0303c13128b2cd1ba31130c40bf93da90efde81b3877d9be5a326e"} Jan 26 22:46:08 crc kubenswrapper[4793]: I0126 22:46:08.179946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" event={"ID":"f90a8a7b-8c9d-4a7f-8287-ddb5b62c4738","Type":"ContainerStarted","Data":"c91bdf7380cab3494b9657f79348325f310e0b983c6dac1ad7ce7de92515f68c"} Jan 26 22:46:09 crc kubenswrapper[4793]: I0126 22:46:09.185785 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:09 crc kubenswrapper[4793]: I0126 22:46:09.208109 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" podStartSLOduration=2.208082331 podStartE2EDuration="2.208082331s" podCreationTimestamp="2026-01-26 22:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:46:09.204112577 +0000 UTC m=+384.192884089" watchObservedRunningTime="2026-01-26 22:46:09.208082331 +0000 UTC m=+384.196853873" Jan 26 22:46:18 crc kubenswrapper[4793]: I0126 22:46:18.322184 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:46:18 crc kubenswrapper[4793]: I0126 22:46:18.322887 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.656593 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.657433 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-78ml2" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" containerID="cri-o://bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.673309 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.674514 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rmxpm" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" containerID="cri-o://55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4" gracePeriod=30 
Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.686925 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.687167 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" containerID="cri-o://0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.700446 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.700743 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s7s8k" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" containerID="cri-o://b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.718350 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvxfj"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.719229 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.723174 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.723587 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2bff" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" containerID="cri-o://a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f" gracePeriod=30 Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.729057 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvxfj"] Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.836871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.836938 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.837505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpd6c\" (UniqueName: \"kubernetes.io/projected/768ab49f-bfcd-43e0-829f-226035ded4c8-kube-api-access-kpd6c\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.938548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpd6c\" (UniqueName: \"kubernetes.io/projected/768ab49f-bfcd-43e0-829f-226035ded4c8-kube-api-access-kpd6c\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.938665 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.938715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.945868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.949320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/768ab49f-bfcd-43e0-829f-226035ded4c8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:21 crc kubenswrapper[4793]: I0126 22:46:21.954524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpd6c\" (UniqueName: \"kubernetes.io/projected/768ab49f-bfcd-43e0-829f-226035ded4c8-kube-api-access-kpd6c\") pod \"marketplace-operator-79b997595-mvxfj\" (UID: \"768ab49f-bfcd-43e0-829f-226035ded4c8\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.040807 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.294308 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerID="bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.294406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.296229 4793 generic.go:334] "Generic (PLEG): container finished" podID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerID="0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.296288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerDied","Data":"0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.298134 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ece571b-df1f-4605-8127-b71fb41d2189" containerID="b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.298227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.300537 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerID="55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.300598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.303084 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerID="a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f" exitCode=0 Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.303105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f"} Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.446494 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvxfj"] Jan 26 22:46:22 crc kubenswrapper[4793]: W0126 22:46:22.457689 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod768ab49f_bfcd_43e0_829f_226035ded4c8.slice/crio-d92285567c08296070d3b4dde0b57c53acb3e65859c47cf36cbe340324077b93 WatchSource:0}: Error finding container 
Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.494612 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.549928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") pod \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.549986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") pod \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.550097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") pod \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\" (UID: \"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.551215 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities" (OuterVolumeSpecName: "utilities") pod "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" (UID: "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.562838 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c" (OuterVolumeSpecName: "kube-api-access-7tf6c") pod "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" (UID: "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"). InnerVolumeSpecName "kube-api-access-7tf6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.607139 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.623931 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" (UID: "ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.646161 4793 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") pod \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") pod \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650621 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") pod \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650659 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") pod \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") pod \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\" (UID: \"1ea0b73a-1820-4411-bbbf-acd3d22899e0\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650772 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") pod \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\" (UID: \"b0a722d2-056a-4bf2-a33c-719ee8aba7a8\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650964 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tf6c\" (UniqueName: \"kubernetes.io/projected/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-kube-api-access-7tf6c\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650981 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.650994 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.651724 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities" (OuterVolumeSpecName: "utilities") pod "1ea0b73a-1820-4411-bbbf-acd3d22899e0" (UID: "1ea0b73a-1820-4411-bbbf-acd3d22899e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.652760 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities" (OuterVolumeSpecName: "utilities") pod "b0a722d2-056a-4bf2-a33c-719ee8aba7a8" (UID: "b0a722d2-056a-4bf2-a33c-719ee8aba7a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.664440 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk" (OuterVolumeSpecName: "kube-api-access-5lhnk") pod "1ea0b73a-1820-4411-bbbf-acd3d22899e0" (UID: "1ea0b73a-1820-4411-bbbf-acd3d22899e0"). InnerVolumeSpecName "kube-api-access-5lhnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.664471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5" (OuterVolumeSpecName: "kube-api-access-zm6z5") pod "b0a722d2-056a-4bf2-a33c-719ee8aba7a8" (UID: "b0a722d2-056a-4bf2-a33c-719ee8aba7a8"). InnerVolumeSpecName "kube-api-access-zm6z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.669086 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.669885 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.717789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ea0b73a-1820-4411-bbbf-acd3d22899e0" (UID: "1ea0b73a-1820-4411-bbbf-acd3d22899e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752582 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752640 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752653 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea0b73a-1820-4411-bbbf-acd3d22899e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752670 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lhnk\" (UniqueName: \"kubernetes.io/projected/1ea0b73a-1820-4411-bbbf-acd3d22899e0-kube-api-access-5lhnk\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.752686 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6z5\" (UniqueName: \"kubernetes.io/projected/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-kube-api-access-zm6z5\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.813093 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0a722d2-056a-4bf2-a33c-719ee8aba7a8" (UID: "b0a722d2-056a-4bf2-a33c-719ee8aba7a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") pod \"f7779d32-d1d6-4e24-b59e-04461b1021c3\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") pod \"f7779d32-d1d6-4e24-b59e-04461b1021c3\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") pod \"2ece571b-df1f-4605-8127-b71fb41d2189\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") pod \"f7779d32-d1d6-4e24-b59e-04461b1021c3\" (UID: \"f7779d32-d1d6-4e24-b59e-04461b1021c3\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853701 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") pod \"2ece571b-df1f-4605-8127-b71fb41d2189\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853740 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") pod \"2ece571b-df1f-4605-8127-b71fb41d2189\" (UID: \"2ece571b-df1f-4605-8127-b71fb41d2189\") " Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.853959 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a722d2-056a-4bf2-a33c-719ee8aba7a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.854587 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f7779d32-d1d6-4e24-b59e-04461b1021c3" (UID: "f7779d32-d1d6-4e24-b59e-04461b1021c3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.854839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities" (OuterVolumeSpecName: "utilities") pod "2ece571b-df1f-4605-8127-b71fb41d2189" (UID: "2ece571b-df1f-4605-8127-b71fb41d2189"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.858321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7" (OuterVolumeSpecName: "kube-api-access-rwhp7") pod "2ece571b-df1f-4605-8127-b71fb41d2189" (UID: "2ece571b-df1f-4605-8127-b71fb41d2189"). InnerVolumeSpecName "kube-api-access-rwhp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.858635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f7779d32-d1d6-4e24-b59e-04461b1021c3" (UID: "f7779d32-d1d6-4e24-b59e-04461b1021c3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.858785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k" (OuterVolumeSpecName: "kube-api-access-kqv7k") pod "f7779d32-d1d6-4e24-b59e-04461b1021c3" (UID: "f7779d32-d1d6-4e24-b59e-04461b1021c3"). InnerVolumeSpecName "kube-api-access-kqv7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.879763 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ece571b-df1f-4605-8127-b71fb41d2189" (UID: "2ece571b-df1f-4605-8127-b71fb41d2189"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.954930 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955331 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqv7k\" (UniqueName: \"kubernetes.io/projected/f7779d32-d1d6-4e24-b59e-04461b1021c3-kube-api-access-kqv7k\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955420 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwhp7\" (UniqueName: \"kubernetes.io/projected/2ece571b-df1f-4605-8127-b71fb41d2189-kube-api-access-rwhp7\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955498 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7779d32-d1d6-4e24-b59e-04461b1021c3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955574 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:22 crc kubenswrapper[4793]: I0126 22:46:22.955648 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ece571b-df1f-4605-8127-b71fb41d2189-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.309834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxpm" event={"ID":"1ea0b73a-1820-4411-bbbf-acd3d22899e0","Type":"ContainerDied","Data":"b6c39e29a72b59338ca03d2810962928be44b01ddeee62784d5e0ce96eb5572d"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.309913 4793 scope.go:117] "RemoveContainer" containerID="55f45e1681d34762f5dc72f1975ec4d09f5e4114953396254618a8f16aef59e4" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.309862 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxpm" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.312250 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2bff" event={"ID":"b0a722d2-056a-4bf2-a33c-719ee8aba7a8","Type":"ContainerDied","Data":"ab28d055e5e6d116639cc20c8075ccee993ec8ea1d3f84c9e75e2020b4dbfa88"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.312269 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2bff" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.315135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78ml2" event={"ID":"ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125","Type":"ContainerDied","Data":"de4523d60ec6d9e2d26e56021a3eb00ace652444b6905d5221950bbb9e65363d"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.315332 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78ml2" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.317434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" event={"ID":"768ab49f-bfcd-43e0-829f-226035ded4c8","Type":"ContainerStarted","Data":"201f8e5f9df01303aefdf163e8149434446d4b73605da6880d1c578cf8971b3d"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.317683 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.317699 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" event={"ID":"768ab49f-bfcd-43e0-829f-226035ded4c8","Type":"ContainerStarted","Data":"d92285567c08296070d3b4dde0b57c53acb3e65859c47cf36cbe340324077b93"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.318996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" event={"ID":"f7779d32-d1d6-4e24-b59e-04461b1021c3","Type":"ContainerDied","Data":"7bb09463affe65b8c8341ff8758a1535e4d3942375b31382f2863df221ac542b"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.319067 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rzprv" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.321018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7s8k" event={"ID":"2ece571b-df1f-4605-8127-b71fb41d2189","Type":"ContainerDied","Data":"ea2f41ae1538a21ff971c4ee16cf82bf0b9ff3fb1449256723e833b410cd7678"} Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.321089 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7s8k" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.329536 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.338338 4793 scope.go:117] "RemoveContainer" containerID="f47964ab77003c1812c68563a6ab332512b80c93bca84785abf6954776142bb0" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.344627 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mvxfj" podStartSLOduration=2.344603446 podStartE2EDuration="2.344603446s" podCreationTimestamp="2026-01-26 22:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:46:23.340329952 +0000 UTC m=+398.329101474" watchObservedRunningTime="2026-01-26 22:46:23.344603446 +0000 UTC m=+398.333374958" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.369930 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.375375 4793 scope.go:117] "RemoveContainer" containerID="c998dd75c98b69795ee5fb5dadc61e23eb1a709df4bc6c6589c9107c4d8626e1" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.375418 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rmxpm"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.407476 4793 scope.go:117] "RemoveContainer" containerID="a1da675f56afc66f10254ae8acb8acfbb6033d5b14e0b5831b35b1c1d907371f" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.412321 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.419608 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7s8k"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.430436 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.436962 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-78ml2"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.439807 4793 scope.go:117] "RemoveContainer" containerID="259312878a124238b69d304541bda4609b01b2ae8eda7e0b801b983fe6b8048d" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.448013 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.450316 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rzprv"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.455648 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.456006 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2bff"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.460562 4793 scope.go:117] "RemoveContainer" containerID="43bfb5939bd86706aa081141535e22be807fc9c0577dcd35f30a1721eb3de604" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 
22:46:23.484300 4793 scope.go:117] "RemoveContainer" containerID="bd6863c8a9c27707c6dccdad20d47ee96c33964b5f559bcf79975f2bd5c9666c" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.501237 4793 scope.go:117] "RemoveContainer" containerID="a1c6048ea8865fb8c9c51c5203790e66b0db46dbc02e36a248d91ae4039fe139" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.518485 4793 scope.go:117] "RemoveContainer" containerID="53ade5b98f6c11809a6f4eec9f04e9ed3dc37b0586623e2c12f5e703dca23ac1" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.530045 4793 scope.go:117] "RemoveContainer" containerID="0261db876c5107325c1d28c55b554ed6cd68dfefffa0d5f854c959425c4e8325" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.541285 4793 scope.go:117] "RemoveContainer" containerID="b983778d2b376f12a054d385d4e78fd86cd0c3fd1e88a5f4df510c38cf861052" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.561770 4793 scope.go:117] "RemoveContainer" containerID="9deb6cc715d13e34300108f710dc2b32d0af0395f968c159a45df8f3c1fa5d0c" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.574504 4793 scope.go:117] "RemoveContainer" containerID="df91a57438b106203c49f3ea641745b85b7989ad66fa5fea056e5bed77508967" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.767488 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" path="/var/lib/kubelet/pods/1ea0b73a-1820-4411-bbbf-acd3d22899e0/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.768404 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" path="/var/lib/kubelet/pods/2ece571b-df1f-4605-8127-b71fb41d2189/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.769241 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" path="/var/lib/kubelet/pods/ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.770806 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" path="/var/lib/kubelet/pods/b0a722d2-056a-4bf2-a33c-719ee8aba7a8/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.771825 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" path="/var/lib/kubelet/pods/f7779d32-d1d6-4e24-b59e-04461b1021c3/volumes" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.890931 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dbctj"] Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892668 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892694 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892734 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892751 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892762 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892775 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892811 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892826 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892836 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892847 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892856 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892890 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892899 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.892911 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.892990 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893059 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893099 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893142 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893152 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="extract-utilities" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893163 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893173 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893729 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893743 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: E0126 22:46:23.893882 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.893895 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="extract-content" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.894841 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7779d32-d1d6-4e24-b59e-04461b1021c3" containerName="marketplace-operator" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895245 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a722d2-056a-4bf2-a33c-719ee8aba7a8" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895258 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ece571b-df1f-4605-8127-b71fb41d2189" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895272 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea0b73a-1820-4411-bbbf-acd3d22899e0" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.895321 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad403bd1-f9e6-4b89-8a40-fe2e7b8d2125" containerName="registry-server" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.900263 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbctj"] Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.900819 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:23 crc kubenswrapper[4793]: I0126 22:46:23.917009 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.069125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26c54\" (UniqueName: \"kubernetes.io/projected/84e4ec14-74a8-48d6-a90c-7cf20d345d74-kube-api-access-26c54\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.069221 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-utilities\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.069251 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-catalog-content\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.078438 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hr5wt"] Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.079711 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.084035 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.091055 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hr5wt"] Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.170493 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-utilities\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.170553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-catalog-content\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.170605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26c54\" (UniqueName: \"kubernetes.io/projected/84e4ec14-74a8-48d6-a90c-7cf20d345d74-kube-api-access-26c54\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.171714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-utilities\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.171713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e4ec14-74a8-48d6-a90c-7cf20d345d74-catalog-content\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.189971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26c54\" (UniqueName: \"kubernetes.io/projected/84e4ec14-74a8-48d6-a90c-7cf20d345d74-kube-api-access-26c54\") pod \"redhat-marketplace-dbctj\" (UID: \"84e4ec14-74a8-48d6-a90c-7cf20d345d74\") " pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.226503 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.271973 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-catalog-content\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.272029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-utilities\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.272076 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7gf\" (UniqueName: \"kubernetes.io/projected/9704070a-bf0b-4232-8e47-933e5f2ed889-kube-api-access-6r7gf\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.374333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-utilities\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.374452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7gf\" (UniqueName: \"kubernetes.io/projected/9704070a-bf0b-4232-8e47-933e5f2ed889-kube-api-access-6r7gf\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.374526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-catalog-content\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.375095 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-utilities\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.375114 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9704070a-bf0b-4232-8e47-933e5f2ed889-catalog-content\") pod \"redhat-operators-hr5wt\" (UID: \"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.396626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7gf\" (UniqueName: \"kubernetes.io/projected/9704070a-bf0b-4232-8e47-933e5f2ed889-kube-api-access-6r7gf\") pod \"redhat-operators-hr5wt\" (UID: 
\"9704070a-bf0b-4232-8e47-933e5f2ed889\") " pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.418942 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.451118 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbctj"] Jan 26 22:46:24 crc kubenswrapper[4793]: W0126 22:46:24.457705 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e4ec14_74a8_48d6_a90c_7cf20d345d74.slice/crio-a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f WatchSource:0}: Error finding container a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f: Status 404 returned error can't find the container with id a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f Jan 26 22:46:24 crc kubenswrapper[4793]: I0126 22:46:24.626409 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hr5wt"] Jan 26 22:46:24 crc kubenswrapper[4793]: W0126 22:46:24.632490 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9704070a_bf0b_4232_8e47_933e5f2ed889.slice/crio-498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e WatchSource:0}: Error finding container 498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e: Status 404 returned error can't find the container with id 498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.353045 4793 generic.go:334] "Generic (PLEG): container finished" podID="9704070a-bf0b-4232-8e47-933e5f2ed889" containerID="aebb2d02c3863da955a1ff40c1abe3cd4babf3e77ec1747eaaccfab32e015400" exitCode=0 Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.353163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerDied","Data":"aebb2d02c3863da955a1ff40c1abe3cd4babf3e77ec1747eaaccfab32e015400"} Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.353272 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerStarted","Data":"498b4cd0435c7f3c7e8283bcc82f6992749921d6587f09f4cfa9c1f2643c637e"} Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.355170 4793 generic.go:334] "Generic (PLEG): container finished" podID="84e4ec14-74a8-48d6-a90c-7cf20d345d74" containerID="a1bc8a72fd3d03823afbc16877e4c2346324918eeaa8a4b9bd9dbc3951bec425" exitCode=0 Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.355369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerDied","Data":"a1bc8a72fd3d03823afbc16877e4c2346324918eeaa8a4b9bd9dbc3951bec425"} Jan 26 22:46:25 crc kubenswrapper[4793]: I0126 22:46:25.355850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerStarted","Data":"a50bcab4b3450e4b8e9bbfeb90c0a6a9402c8739579aef6301dc3e99f6ceb52f"} Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 
22:46:26.284784 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mvm2"] Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.285918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.288502 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.295284 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mvm2"] Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.416203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-utilities\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.416330 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4cv\" (UniqueName: \"kubernetes.io/projected/1676c9d7-42b7-4566-802d-8a3caeef1380-kube-api-access-wk4cv\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.417797 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-catalog-content\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.489647 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snkfp"] Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.493298 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.495737 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.503573 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snkfp"] Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.519137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4cv\" (UniqueName: \"kubernetes.io/projected/1676c9d7-42b7-4566-802d-8a3caeef1380-kube-api-access-wk4cv\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.519224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-catalog-content\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.519713 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-utilities\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.520163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-utilities\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.520559 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1676c9d7-42b7-4566-802d-8a3caeef1380-catalog-content\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.545151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4cv\" (UniqueName: \"kubernetes.io/projected/1676c9d7-42b7-4566-802d-8a3caeef1380-kube-api-access-wk4cv\") pod \"community-operators-6mvm2\" (UID: \"1676c9d7-42b7-4566-802d-8a3caeef1380\") " pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.611551 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.621123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-catalog-content\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.621309 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-utilities\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.621446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m4vn\" (UniqueName: \"kubernetes.io/projected/af624fd1-30e1-4ee0-8253-89318c76eb99-kube-api-access-9m4vn\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.722340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-catalog-content\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.722801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-utilities\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.722845 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m4vn\" (UniqueName: \"kubernetes.io/projected/af624fd1-30e1-4ee0-8253-89318c76eb99-kube-api-access-9m4vn\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.723486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-catalog-content\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.724160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af624fd1-30e1-4ee0-8253-89318c76eb99-utilities\") pod \"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.752156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m4vn\" (UniqueName: \"kubernetes.io/projected/af624fd1-30e1-4ee0-8253-89318c76eb99-kube-api-access-9m4vn\") pod 
\"certified-operators-snkfp\" (UID: \"af624fd1-30e1-4ee0-8253-89318c76eb99\") " pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:26 crc kubenswrapper[4793]: I0126 22:46:26.820850 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.022376 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snkfp"] Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.028013 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mvm2"] Jan 26 22:46:27 crc kubenswrapper[4793]: W0126 22:46:27.030344 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1676c9d7_42b7_4566_802d_8a3caeef1380.slice/crio-b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459 WatchSource:0}: Error finding container b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459: Status 404 returned error can't find the container with id b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459 Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.373269 4793 generic.go:334] "Generic (PLEG): container finished" podID="1676c9d7-42b7-4566-802d-8a3caeef1380" containerID="758c0b434497349273b97b094196de1f2780899ddba9c8a02322bbac7693f7ea" exitCode=0 Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.373480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerDied","Data":"758c0b434497349273b97b094196de1f2780899ddba9c8a02322bbac7693f7ea"} Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.373819 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerStarted","Data":"b4694702b0becefe080af1eabb254da552bcaef2c2598d927507aaed8854c459"} Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.380052 4793 generic.go:334] "Generic (PLEG): container finished" podID="9704070a-bf0b-4232-8e47-933e5f2ed889" containerID="c3fbb5d754d3dccce8ddf5ddc5c725b7921b08aec37862a557213727ee6c0374" exitCode=0 Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.380095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerDied","Data":"c3fbb5d754d3dccce8ddf5ddc5c725b7921b08aec37862a557213727ee6c0374"} Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.382395 4793 generic.go:334] "Generic (PLEG): container finished" podID="af624fd1-30e1-4ee0-8253-89318c76eb99" containerID="6a489804df28de6deeeb5e08a555849320520ffab8da1bb5cc8535d2fede4486" exitCode=0 Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.382490 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerDied","Data":"6a489804df28de6deeeb5e08a555849320520ffab8da1bb5cc8535d2fede4486"} Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.382522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" 
event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerStarted","Data":"0ece0ec0232026ee340f7d742396c61b11fdbac431b6322500a3333dbe6ea1ce"} Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.386114 4793 generic.go:334] "Generic (PLEG): container finished" podID="84e4ec14-74a8-48d6-a90c-7cf20d345d74" containerID="90a8d45ac7339a2ea2e2d014322638a68c4c742c8c0f8e7218e9114dd99db6f1" exitCode=0 Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.386182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerDied","Data":"90a8d45ac7339a2ea2e2d014322638a68c4c742c8c0f8e7218e9114dd99db6f1"} Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.587980 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gckjm" Jan 26 22:46:27 crc kubenswrapper[4793]: I0126 22:46:27.635248 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.396756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hr5wt" event={"ID":"9704070a-bf0b-4232-8e47-933e5f2ed889","Type":"ContainerStarted","Data":"ee6617132f505d1e7567cfdf38d75cbef6294312e494f25d43941f90c393a028"} Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.405007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbctj" event={"ID":"84e4ec14-74a8-48d6-a90c-7cf20d345d74","Type":"ContainerStarted","Data":"163c2ab0e0191216d8f02d1459fc6fbf86be3836e60e692d215d828539256663"} Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.410821 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerStarted","Data":"42d8b739a79864a9d0d175be22082b9d531fa264f9867b85eba48530e8d19ee7"} Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.420466 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hr5wt" podStartSLOduration=1.901678574 podStartE2EDuration="4.420447064s" podCreationTimestamp="2026-01-26 22:46:24 +0000 UTC" firstStartedPulling="2026-01-26 22:46:25.356868183 +0000 UTC m=+400.345639695" lastFinishedPulling="2026-01-26 22:46:27.875636673 +0000 UTC m=+402.864408185" observedRunningTime="2026-01-26 22:46:28.418634352 +0000 UTC m=+403.407405864" watchObservedRunningTime="2026-01-26 22:46:28.420447064 +0000 UTC m=+403.409218586" Jan 26 22:46:28 crc kubenswrapper[4793]: I0126 22:46:28.453668 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dbctj" podStartSLOduration=2.813344661 podStartE2EDuration="5.453637833s" podCreationTimestamp="2026-01-26 22:46:23 +0000 UTC" firstStartedPulling="2026-01-26 22:46:25.357956174 +0000 UTC m=+400.346727686" lastFinishedPulling="2026-01-26 22:46:27.998249346 +0000 UTC m=+402.987020858" observedRunningTime="2026-01-26 22:46:28.433506572 +0000 UTC m=+403.422278084" watchObservedRunningTime="2026-01-26 22:46:28.453637833 +0000 UTC m=+403.442409345" Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.419227 4793 generic.go:334] "Generic (PLEG): container finished" podID="af624fd1-30e1-4ee0-8253-89318c76eb99" 
containerID="94571a91784d48e98084ef42017ea96b14d8bc6aa4045ace890a13e0a86e7ea3" exitCode=0 Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.419315 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerDied","Data":"94571a91784d48e98084ef42017ea96b14d8bc6aa4045ace890a13e0a86e7ea3"} Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.421467 4793 generic.go:334] "Generic (PLEG): container finished" podID="1676c9d7-42b7-4566-802d-8a3caeef1380" containerID="42d8b739a79864a9d0d175be22082b9d531fa264f9867b85eba48530e8d19ee7" exitCode=0 Jan 26 22:46:29 crc kubenswrapper[4793]: I0126 22:46:29.421617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerDied","Data":"42d8b739a79864a9d0d175be22082b9d531fa264f9867b85eba48530e8d19ee7"} Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.436041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mvm2" event={"ID":"1676c9d7-42b7-4566-802d-8a3caeef1380","Type":"ContainerStarted","Data":"3cbd815e9b3436b771c9bd9537cb9fe0e9a4d751972eca6a8809a227f579795d"} Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.439001 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snkfp" event={"ID":"af624fd1-30e1-4ee0-8253-89318c76eb99","Type":"ContainerStarted","Data":"c3da0430e2a945b19db7a4844a21ccb289ae8544897872eabf3e52935b6d9741"} Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.457310 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mvm2" podStartSLOduration=2.417306504 podStartE2EDuration="5.457289993s" podCreationTimestamp="2026-01-26 22:46:26 +0000 UTC" firstStartedPulling="2026-01-26 22:46:27.375728791 +0000 UTC m=+402.364500303" lastFinishedPulling="2026-01-26 22:46:30.41571228 +0000 UTC m=+405.404483792" observedRunningTime="2026-01-26 22:46:31.453322438 +0000 UTC m=+406.442093950" watchObservedRunningTime="2026-01-26 22:46:31.457289993 +0000 UTC m=+406.446061495" Jan 26 22:46:31 crc kubenswrapper[4793]: I0126 22:46:31.473210 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snkfp" podStartSLOduration=2.206127173 podStartE2EDuration="5.473170702s" podCreationTimestamp="2026-01-26 22:46:26 +0000 UTC" firstStartedPulling="2026-01-26 22:46:27.383771284 +0000 UTC m=+402.372542796" lastFinishedPulling="2026-01-26 22:46:30.650814813 +0000 UTC m=+405.639586325" observedRunningTime="2026-01-26 22:46:31.473134341 +0000 UTC m=+406.461905863" watchObservedRunningTime="2026-01-26 22:46:31.473170702 +0000 UTC m=+406.461942214" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.226760 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.227488 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.271994 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.420536 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.420613 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.468907 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.513843 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dbctj" Jan 26 22:46:34 crc kubenswrapper[4793]: I0126 22:46:34.518283 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hr5wt" Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.612155 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.612645 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.653995 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.821684 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.821901 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:36 crc kubenswrapper[4793]: I0126 22:46:36.859509 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:37 crc kubenswrapper[4793]: I0126 22:46:37.524599 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mvm2" Jan 26 22:46:37 crc kubenswrapper[4793]: I0126 22:46:37.540878 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-snkfp" Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.323018 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.325512 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.325791 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.327080 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.327440 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf" gracePeriod=600 Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.546264 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf" exitCode=0 Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.546338 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf"} Jan 26 22:46:48 crc kubenswrapper[4793]: I0126 22:46:48.546473 4793 scope.go:117] "RemoveContainer" containerID="f4e5151a32e065f3cd82311498f302cce588753ebdc9261e88bbba83aa3d5944" Jan 26 22:46:49 crc kubenswrapper[4793]: I0126 22:46:49.556431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"} Jan 26 22:46:52 crc kubenswrapper[4793]: I0126 22:46:52.673075 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" containerID="cri-o://3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a" gracePeriod=30 Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.587744 4793 generic.go:334] "Generic (PLEG): container finished" podID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerID="3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a" exitCode=0 Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.587826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerDied","Data":"3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a"} Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.704095 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764325 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764912 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.764979 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765046 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.765765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") pod \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\" (UID: \"ceed8696-7889-4e56-b430-dc4a6d46e1e6\") " Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.772872 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.774360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.776415 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.781963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.786055 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw" (OuterVolumeSpecName: "kube-api-access-ldcbw") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "kube-api-access-ldcbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.786305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.786801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.803783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ceed8696-7889-4e56-b430-dc4a6d46e1e6" (UID: "ceed8696-7889-4e56-b430-dc4a6d46e1e6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871788 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ceed8696-7889-4e56-b430-dc4a6d46e1e6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871836 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ceed8696-7889-4e56-b430-dc4a6d46e1e6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871850 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871858 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871867 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldcbw\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-kube-api-access-ldcbw\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871877 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ceed8696-7889-4e56-b430-dc4a6d46e1e6-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:53 crc kubenswrapper[4793]: I0126 22:46:53.871923 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ceed8696-7889-4e56-b430-dc4a6d46e1e6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.598379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" event={"ID":"ceed8696-7889-4e56-b430-dc4a6d46e1e6","Type":"ContainerDied","Data":"4db1e2f2c7947383605b71ec06e390f57d260fbd62a9488814232435a291d5db"} Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.598466 4793 scope.go:117] "RemoveContainer" containerID="3d5ea8649cdaffecebca0a6eab6267672b0e1746c4d8f1584f0ae6957690d95a" Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.598562 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s8pqg" Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.641014 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:46:54 crc kubenswrapper[4793]: I0126 22:46:54.645372 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s8pqg"] Jan 26 22:46:55 crc kubenswrapper[4793]: I0126 22:46:55.773578 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" path="/var/lib/kubelet/pods/ceed8696-7889-4e56-b430-dc4a6d46e1e6/volumes" Jan 26 22:49:18 crc kubenswrapper[4793]: I0126 22:49:18.322513 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:49:18 crc kubenswrapper[4793]: I0126 22:49:18.323358 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:49:46 crc kubenswrapper[4793]: I0126 22:49:46.053237 4793 scope.go:117] "RemoveContainer" containerID="4e9a91212ef600945af935beead74da745fac2e85779d11a2bbe732917961c99" Jan 26 22:49:48 crc kubenswrapper[4793]: I0126 22:49:48.323025 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:49:48 crc kubenswrapper[4793]: I0126 22:49:48.323151 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.322615 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.323663 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.323737 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.324628 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:50:18 crc kubenswrapper[4793]: I0126 22:50:18.324704 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe" gracePeriod=600 Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.110351 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe" exitCode=0 Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.110643 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"} Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.111215 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37"} Jan 26 22:50:19 crc kubenswrapper[4793]: I0126 22:50:19.111256 4793 scope.go:117] "RemoveContainer" containerID="7075b8e2b6392f8f5dd779c1353cdbef01daae1157751de798f0a136ad2034cf" Jan 26 22:52:18 crc kubenswrapper[4793]: I0126 22:52:18.322759 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:52:18 crc kubenswrapper[4793]: I0126 22:52:18.323581 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:52:30 crc kubenswrapper[4793]: I0126 22:52:30.498723 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.067347 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:52:41 crc kubenswrapper[4793]: E0126 22:52:41.070795 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.070862 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.071057 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceed8696-7889-4e56-b430-dc4a6d46e1e6" containerName="registry" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.072483 4793 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.075276 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.136278 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.136333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.136386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.237927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.274526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"redhat-operators-sgsmk\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.405479 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:41 crc kubenswrapper[4793]: I0126 22:52:41.657720 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.127747 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ddad55e-ec81-4cca-9993-f266aceed079" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" exitCode=0 Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.127828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46"} Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.128318 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerStarted","Data":"d6f759f878c836ad8b69979ff57c86e197b44603edc5e72c519f0704218c6685"} Jan 26 22:52:42 crc kubenswrapper[4793]: I0126 22:52:42.129999 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 22:52:43 crc kubenswrapper[4793]: I0126 22:52:43.137043 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerStarted","Data":"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69"} Jan 26 22:52:44 crc kubenswrapper[4793]: I0126 22:52:44.146540 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ddad55e-ec81-4cca-9993-f266aceed079" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" exitCode=0 Jan 26 22:52:44 crc kubenswrapper[4793]: I0126 22:52:44.146609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69"} Jan 26 22:52:45 crc kubenswrapper[4793]: I0126 22:52:45.166659 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerStarted","Data":"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694"} Jan 26 22:52:45 crc kubenswrapper[4793]: I0126 22:52:45.195173 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sgsmk" podStartSLOduration=1.7590943989999999 podStartE2EDuration="4.195149122s" podCreationTimestamp="2026-01-26 22:52:41 +0000 UTC" firstStartedPulling="2026-01-26 22:52:42.129728041 +0000 UTC m=+777.118499553" lastFinishedPulling="2026-01-26 22:52:44.565782754 +0000 UTC m=+779.554554276" observedRunningTime="2026-01-26 22:52:45.191612421 +0000 UTC m=+780.180383993" watchObservedRunningTime="2026-01-26 22:52:45.195149122 +0000 UTC m=+780.183920644" Jan 26 22:52:48 crc 
kubenswrapper[4793]: I0126 22:52:48.322735 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:52:48 crc kubenswrapper[4793]: I0126 22:52:48.323363 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:52:51 crc kubenswrapper[4793]: I0126 22:52:51.405810 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:51 crc kubenswrapper[4793]: I0126 22:52:51.407559 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:52:52 crc kubenswrapper[4793]: I0126 22:52:52.472156 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sgsmk" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" probeResult="failure" output=< Jan 26 22:52:52 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 26 22:52:52 crc kubenswrapper[4793]: > Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.498088 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"] Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.499107 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.501699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.514100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"] Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.655964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.656026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.656097 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.757576 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.757752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.758020 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.758417 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.759042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.798810 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:53 crc kubenswrapper[4793]: I0126 22:52:53.817453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:52:54 crc kubenswrapper[4793]: I0126 22:52:54.131814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8"] Jan 26 22:52:54 crc kubenswrapper[4793]: I0126 22:52:54.228222 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerStarted","Data":"6079632c3586a032fad505fc58c9bdf543c6c0f9d0096be920c7b9066b0c0faf"} Jan 26 22:52:55 crc kubenswrapper[4793]: I0126 22:52:55.268074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerStarted","Data":"525077eadc69cbb034a38a46f77a91f42674d4880436ff53f1c19a084bb46a3e"} Jan 26 22:52:56 crc kubenswrapper[4793]: I0126 22:52:56.278049 4793 generic.go:334] "Generic (PLEG): container finished" podID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerID="525077eadc69cbb034a38a46f77a91f42674d4880436ff53f1c19a084bb46a3e" exitCode=0 Jan 26 22:52:56 crc kubenswrapper[4793]: I0126 22:52:56.278172 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"525077eadc69cbb034a38a46f77a91f42674d4880436ff53f1c19a084bb46a3e"} Jan 26 22:52:58 crc kubenswrapper[4793]: I0126 22:52:58.294250 4793 generic.go:334] "Generic (PLEG): container finished" podID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerID="0fcce5b41356fa44cb98dfad8e9a0f443746aa0a93940a91a8100514e59a8a51" exitCode=0 Jan 26 22:52:58 crc kubenswrapper[4793]: I0126 22:52:58.294321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" 
event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"0fcce5b41356fa44cb98dfad8e9a0f443746aa0a93940a91a8100514e59a8a51"} Jan 26 22:52:59 crc kubenswrapper[4793]: I0126 22:52:59.306321 4793 generic.go:334] "Generic (PLEG): container finished" podID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerID="da0860d1b2c1f834cd5dbb6bd0281a5792b04c2becc3f164bef7044c14b718b5" exitCode=0 Jan 26 22:52:59 crc kubenswrapper[4793]: I0126 22:52:59.306408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"da0860d1b2c1f834cd5dbb6bd0281a5792b04c2becc3f164bef7044c14b718b5"} Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.626693 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.692256 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") pod \"e776cde1-cc4e-40ce-a7b4-eef264360331\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.692413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") pod \"e776cde1-cc4e-40ce-a7b4-eef264360331\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.692488 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") pod \"e776cde1-cc4e-40ce-a7b4-eef264360331\" (UID: \"e776cde1-cc4e-40ce-a7b4-eef264360331\") " Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.693014 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle" (OuterVolumeSpecName: "bundle") pod "e776cde1-cc4e-40ce-a7b4-eef264360331" (UID: "e776cde1-cc4e-40ce-a7b4-eef264360331"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.705164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs" (OuterVolumeSpecName: "kube-api-access-bgbrs") pod "e776cde1-cc4e-40ce-a7b4-eef264360331" (UID: "e776cde1-cc4e-40ce-a7b4-eef264360331"). InnerVolumeSpecName "kube-api-access-bgbrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.718218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util" (OuterVolumeSpecName: "util") pod "e776cde1-cc4e-40ce-a7b4-eef264360331" (UID: "e776cde1-cc4e-40ce-a7b4-eef264360331"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.793735 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-util\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.793777 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgbrs\" (UniqueName: \"kubernetes.io/projected/e776cde1-cc4e-40ce-a7b4-eef264360331-kube-api-access-bgbrs\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:00 crc kubenswrapper[4793]: I0126 22:53:00.793798 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e776cde1-cc4e-40ce-a7b4-eef264360331-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.323470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" event={"ID":"e776cde1-cc4e-40ce-a7b4-eef264360331","Type":"ContainerDied","Data":"6079632c3586a032fad505fc58c9bdf543c6c0f9d0096be920c7b9066b0c0faf"} Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.323531 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6079632c3586a032fad505fc58c9bdf543c6c0f9d0096be920c7b9066b0c0faf" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.323555 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wmzv8" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.471709 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.543427 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.721207 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722118 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller" containerID="cri-o://9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722550 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb" containerID="cri-o://500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722594 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb" containerID="cri-o://aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722635 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd" 
containerID="cri-o://edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722675 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722714 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node" containerID="cri-o://e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.722755 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging" containerID="cri-o://35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" gracePeriod=30 Jan 26 22:53:01 crc kubenswrapper[4793]: I0126 22:53:01.771944 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" containerID="cri-o://89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" gracePeriod=30 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.035803 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.039434 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-acl-logging/0.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.040284 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-controller/0.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.041205 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111160 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lj9ws"] Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111319 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111353 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111448 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111453 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111465 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111470 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 
22:53:02.111481 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111492 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111501 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111508 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111523 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111530 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111539 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111546 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111564 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="extract" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111573 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="extract" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111586 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111594 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111601 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kubecfg-setup" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111608 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kubecfg-setup" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111618 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111624 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111634 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111641 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111649 
4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111657 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111666 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="util" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111672 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="util" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111679 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111686 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111696 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111703 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111709 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="pull" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111716 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="pull" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.111727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111734 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111481 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111846 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111861 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="nbdb" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111871 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="northd" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111882 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111891 4793 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111900 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="sbdb" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111878 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111913 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111922 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111932 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111940 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovnkube-controller" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="ovn-acl-logging" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111957 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerName="kube-rbac-proxy-node" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112053 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e776cde1-cc4e-40ce-a7b4-eef264360331" containerName="extract" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112096 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112125 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc 
kubenswrapper[4793]: I0126 22:53:02.112165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112307 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") pod \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\" (UID: \"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf\") " Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111527 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111546 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111563 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111592 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111607 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111903 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.111959 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket" (OuterVolumeSpecName: "log-socket") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.112808 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113352 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113693 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113719 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113739 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash" (OuterVolumeSpecName: "host-slash") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log" (OuterVolumeSpecName: "node-log") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.113801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.114234 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.116788 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.120466 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd" (OuterVolumeSpecName: "kube-api-access-97spd") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "kube-api-access-97spd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.136665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" (UID: "358c250d-f5aa-4f0f-9fa5-7b699e6c73bf"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.213944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-bin\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214061 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-ovn\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214144 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-systemd-units\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-script-lib\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-env-overrides\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214380 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-kubelet\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214524 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-var-lib-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214616 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-slash\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgdq\" (UniqueName: \"kubernetes.io/projected/02e40a32-f700-4300-a55e-7ba957dd8ab8-kube-api-access-scgdq\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-netns\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.214932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-node-log\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215023 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-systemd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215062 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-config\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215127 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-log-socket\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215269 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovn-node-metrics-cert\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-netd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-etc-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215455 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215478 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215499 4793 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215518 4793 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc 
kubenswrapper[4793]: I0126 22:53:02.215539 4793 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215557 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97spd\" (UniqueName: \"kubernetes.io/projected/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-kube-api-access-97spd\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215575 4793 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215591 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215603 4793 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215618 4793 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215630 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215642 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215656 4793 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215670 4793 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215682 4793 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215697 4793 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215708 4793 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc 
kubenswrapper[4793]: I0126 22:53:02.215721 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215736 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.215751 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-netns\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317934 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-node-log\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.317975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-systemd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-systemd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-run-netns\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.318277 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-node-log\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320377 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-config\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320531 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-config\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320865 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-log-socket\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.320979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321047 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-log-socket\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-netd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovn-node-metrics-cert\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-etc-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321373 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-bin\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321429 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-ovn\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-etc-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-netd\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-cni-bin\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321593 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-systemd-units\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-script-lib\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-run-ovn\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-systemd-units\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-env-overrides\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.321973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-kubelet\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322059 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-kubelet\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-var-lib-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322466 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-var-lib-openvswitch\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-slash\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322909 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scgdq\" (UniqueName: \"kubernetes.io/projected/02e40a32-f700-4300-a55e-7ba957dd8ab8-kube-api-access-scgdq\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-slash\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 
22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.322769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02e40a32-f700-4300-a55e-7ba957dd8ab8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.323002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-env-overrides\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.323078 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovnkube-script-lib\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.328607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02e40a32-f700-4300-a55e-7ba957dd8ab8-ovn-node-metrics-cert\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.336378 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovnkube-controller/3.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.346259 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-acl-logging/0.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347091 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwtbk_358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/ovn-controller/0.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347733 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347773 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347791 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347806 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347820 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" exitCode=0 Jan 26 
22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347833 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" exitCode=0 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347848 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" exitCode=143 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347856 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347865 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347862 4793 generic.go:334] "Generic (PLEG): container finished" podID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" exitCode=143 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.347992 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348285 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348316 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348334 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348351 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348367 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348378 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348389 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348401 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348411 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348449 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348466 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348479 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348497 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348509 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348521 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348531 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348542 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348553 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348564 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348601 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348613 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348625 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348636 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348650 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348661 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348673 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348685 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348016 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348698 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348811 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwtbk" event={"ID":"358c250d-f5aa-4f0f-9fa5-7b699e6c73bf","Type":"ContainerDied","Data":"972d2d9dd7c3ad94fb7b559afde32870a71392b37532ff0117112547580522c5"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348850 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348864 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348879 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348890 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348900 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348911 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348921 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348932 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348943 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.348954 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.353051 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/2.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354007 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354080 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e6daa0d-7641-46e1-b9ab-8479c1cd00d6" containerID="99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382" exitCode=2 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerDied","Data":"99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.354355 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"} Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.355236 4793 scope.go:117] "RemoveContainer" containerID="99049525e6cacaf6c4ab17030e1fd8c38dba224b6ec01ee662a23e82658ca382" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.358304 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.360543 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scgdq\" (UniqueName: \"kubernetes.io/projected/02e40a32-f700-4300-a55e-7ba957dd8ab8-kube-api-access-scgdq\") pod \"ovnkube-node-lj9ws\" (UID: \"02e40a32-f700-4300-a55e-7ba957dd8ab8\") " pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.397697 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.415893 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.423422 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwtbk"] Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.430330 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.460428 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.476134 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: W0126 22:53:02.483297 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e40a32_f700_4300_a55e_7ba957dd8ab8.slice/crio-2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455 WatchSource:0}: Error finding container 2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455: Status 404 returned error can't find the container with id 2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455 Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.494515 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.530710 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.555875 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.586020 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.609590 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.631928 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.649729 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.650253 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.650334 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.650385 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.651034 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651079 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651123 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.651554 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651628 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.651673 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.652479 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.652525 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.652564 4793 scope.go:117] "RemoveContainer" 
containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.652957 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653042 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653090 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.653478 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653520 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653550 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.653852 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653895 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with 
e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.653918 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.654182 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654235 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654257 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.654611 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654653 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.654675 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: E0126 22:53:02.654982 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655020 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc 
error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655051 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655358 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655388 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655699 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.655798 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656116 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656171 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656519 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656553 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc 
kubenswrapper[4793]: I0126 22:53:02.656822 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.656857 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657099 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657128 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657681 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.657732 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658290 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658341 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658664 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID 
starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658706 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.658993 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659028 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659314 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659351 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659573 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659608 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659842 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.659871 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660155 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660215 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660537 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660577 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660877 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.660912 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661168 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661283 4793 scope.go:117] "RemoveContainer" containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661682 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 
26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.661715 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662014 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662046 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.662826 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663491 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663552 4793 scope.go:117] "RemoveContainer" containerID="7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663883 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23"} err="failed to get container status \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": rpc error: code = NotFound desc = could not find container \"7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23\": container with ID starting with 7aea77447c667dcebb1099bcc1e8a2f361659e7c2ce0ba58008f46bfea8d5b23 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.663925 4793 scope.go:117] "RemoveContainer" containerID="500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664237 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b"} err="failed to get container status 
\"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": rpc error: code = NotFound desc = could not find container \"500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b\": container with ID starting with 500183e3e07fa916a8ecf8a90943bfaef1ccecbdbd0c357fd8ab3b868550ac6b not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664275 4793 scope.go:117] "RemoveContainer" containerID="aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664627 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01"} err="failed to get container status \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": rpc error: code = NotFound desc = could not find container \"aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01\": container with ID starting with aae0d47e481d356dc959bba790a8641c18560e33e4f2497ced78e14e08c7de01 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664667 4793 scope.go:117] "RemoveContainer" containerID="edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664944 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f"} err="failed to get container status \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": rpc error: code = NotFound desc = could not find container \"edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f\": container with ID starting with edece84e8a189ba3cd0d2e20cffa34ec6124c158f8bcda82cb20a6d5993d0e4f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.664983 4793 scope.go:117] "RemoveContainer" containerID="ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.665259 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c"} err="failed to get container status \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": rpc error: code = NotFound desc = could not find container \"ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c\": container with ID starting with ec1ffb1e55edc279dd0951e06842c4ae7aa086f25a464774a9d938061be9ea0c not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.665296 4793 scope.go:117] "RemoveContainer" containerID="e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666365 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff"} err="failed to get container status \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": rpc error: code = NotFound desc = could not find container \"e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff\": container with ID starting with e2c6ed54c8be476887dbe2666fdb637a6d59b8feabe9e3573638186af2d840ff not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666408 4793 scope.go:117] "RemoveContainer" 
containerID="35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666685 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130"} err="failed to get container status \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": rpc error: code = NotFound desc = could not find container \"35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130\": container with ID starting with 35066da419101c6c77c5c35aa4c6bb908fe7572e0bc7f1a72707a22ff076e130 not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.666717 4793 scope.go:117] "RemoveContainer" containerID="9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.667003 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f"} err="failed to get container status \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": rpc error: code = NotFound desc = could not find container \"9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f\": container with ID starting with 9b3f55674dc6aef0cfb474a6e7c8fda94080a674aed0bea2fd5a858fe08ac86f not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.667040 4793 scope.go:117] "RemoveContainer" containerID="7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.669733 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef"} err="failed to get container status \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": rpc error: code = NotFound desc = could not find container \"7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef\": container with ID starting with 7b6f2a73d69d4652cd48bd88fa69a4da6a1f47a486d4b9064db36b7caaac8aef not found: ID does not exist" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.669776 4793 scope.go:117] "RemoveContainer" containerID="89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429" Jan 26 22:53:02 crc kubenswrapper[4793]: I0126 22:53:02.670272 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429"} err="failed to get container status \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": rpc error: code = NotFound desc = could not find container \"89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429\": container with ID starting with 89dbc31294d290c69296f75974dc72a75134570b9105a8b5c17b7e0dfc3f4429 not found: ID does not exist" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.365649 4793 generic.go:334] "Generic (PLEG): container finished" podID="02e40a32-f700-4300-a55e-7ba957dd8ab8" containerID="418f5003965d98f496ddfdedc3c00d6c4e91063468638c05a5c59146de39f241" exitCode=0 Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.365734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" 
event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerDied","Data":"418f5003965d98f496ddfdedc3c00d6c4e91063468638c05a5c59146de39f241"} Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.365805 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"2b3be4ff435998f584f6234f4c1d375eb23bb5bd614f3154918d53d007f0f455"} Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.371678 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/2.log" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.373220 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/1.log" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.373510 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sgsmk" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" containerID="cri-o://a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" gracePeriod=2 Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.373638 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l5qgq" event={"ID":"2e6daa0d-7641-46e1-b9ab-8479c1cd00d6","Type":"ContainerStarted","Data":"3b1c3a3e0eb7cd95b6210611bb06d0453b071c6b568c4172bd4e54bd6a201485"} Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.622005 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.744598 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") pod \"8ddad55e-ec81-4cca-9993-f266aceed079\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.744649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") pod \"8ddad55e-ec81-4cca-9993-f266aceed079\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.744698 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") pod \"8ddad55e-ec81-4cca-9993-f266aceed079\" (UID: \"8ddad55e-ec81-4cca-9993-f266aceed079\") " Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.747475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities" (OuterVolumeSpecName: "utilities") pod "8ddad55e-ec81-4cca-9993-f266aceed079" (UID: "8ddad55e-ec81-4cca-9993-f266aceed079"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.764217 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26" (OuterVolumeSpecName: "kube-api-access-kvx26") pod "8ddad55e-ec81-4cca-9993-f266aceed079" (UID: "8ddad55e-ec81-4cca-9993-f266aceed079"). InnerVolumeSpecName "kube-api-access-kvx26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.790748 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="358c250d-f5aa-4f0f-9fa5-7b699e6c73bf" path="/var/lib/kubelet/pods/358c250d-f5aa-4f0f-9fa5-7b699e6c73bf/volumes" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.851638 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.851682 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvx26\" (UniqueName: \"kubernetes.io/projected/8ddad55e-ec81-4cca-9993-f266aceed079-kube-api-access-kvx26\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.877579 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ddad55e-ec81-4cca-9993-f266aceed079" (UID: "8ddad55e-ec81-4cca-9993-f266aceed079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:53:03 crc kubenswrapper[4793]: I0126 22:53:03.952657 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ddad55e-ec81-4cca-9993-f266aceed079-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.071711 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zj8dj"] Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.071994 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-utilities" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072018 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-utilities" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.072040 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072051 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.072073 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-content" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072084 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="extract-content" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072258 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ddad55e-ec81-4cca-9993-f266aceed079" containerName="registry-server" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.072858 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.074932 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.075569 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.077013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-725pq" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.154360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn5l\" (UniqueName: \"kubernetes.io/projected/786f5045-7022-4802-920e-4fcf30bfe3d7-kube-api-access-7jn5l\") pod \"nmstate-operator-646758c888-zj8dj\" (UID: \"786f5045-7022-4802-920e-4fcf30bfe3d7\") " pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.256230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn5l\" (UniqueName: \"kubernetes.io/projected/786f5045-7022-4802-920e-4fcf30bfe3d7-kube-api-access-7jn5l\") pod \"nmstate-operator-646758c888-zj8dj\" (UID: \"786f5045-7022-4802-920e-4fcf30bfe3d7\") " pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.283561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn5l\" (UniqueName: \"kubernetes.io/projected/786f5045-7022-4802-920e-4fcf30bfe3d7-kube-api-access-7jn5l\") pod \"nmstate-operator-646758c888-zj8dj\" (UID: \"786f5045-7022-4802-920e-4fcf30bfe3d7\") " pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380168 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ddad55e-ec81-4cca-9993-f266aceed079" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" exitCode=0 Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380267 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgsmk" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgsmk" event={"ID":"8ddad55e-ec81-4cca-9993-f266aceed079","Type":"ContainerDied","Data":"d6f759f878c836ad8b69979ff57c86e197b44603edc5e72c519f0704218c6685"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.380387 4793 scope.go:117] "RemoveContainer" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"2653663f85338cb600ecb28ce7f0adc5ab5c74874b261fb2dc3904e4047be81b"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"cdee3dd5ed55562c6bff528f41de27d7572b146238c037b20bde1050515e27e2"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"f5099af2b66592de7872934ab3f5f2b78eb2b9bbcce2d4b714d9936d8fd2d32f"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"2bfe2a6dadd071071ee5d9867a018948df9ce831afdd622c5ddbe65ed470eb7d"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.385597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"3ab4022be96ec4162e61306cb10921d6989d7e0c652a20bc74151c9a76f9b361"} Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.391665 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.396489 4793 scope.go:117] "RemoveContainer" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.443032 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.445912 4793 scope.go:117] "RemoveContainer" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.455874 4793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.456504 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.456534 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.456589 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(3f3c5eb8207f0ee854a6b2d0c97764749995315d7eddd7cd862596e516fb01b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" podUID="786f5045-7022-4802-920e-4fcf30bfe3d7" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.459452 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sgsmk"] Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.477645 4793 scope.go:117] "RemoveContainer" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.478254 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694\": container with ID starting with a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694 not found: ID does not exist" containerID="a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478295 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694"} err="failed to get container status \"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694\": rpc error: code = NotFound desc = could not find container \"a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694\": container with ID starting with a2d3a031ea996814aaa207c5a9ac5ec2e4519276185bad239cc1515fb7fb3694 not found: ID does not exist" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478321 4793 scope.go:117] "RemoveContainer" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.478598 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69\": container with ID starting with 3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69 not found: ID does not exist" containerID="3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478618 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69"} err="failed to get container status \"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69\": rpc error: code = NotFound desc = could not find container \"3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69\": container with ID starting with 3b10e94033848db5f81cc91876f94f0f54ebcc7eefef0f96fa5720a93f3d6a69 not found: ID does not exist" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478630 4793 scope.go:117] "RemoveContainer" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" Jan 26 22:53:04 crc kubenswrapper[4793]: E0126 22:53:04.478872 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46\": container with ID starting with ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46 not found: ID does not exist" containerID="ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46" Jan 26 22:53:04 crc kubenswrapper[4793]: I0126 22:53:04.478896 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46"} err="failed to get container status \"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46\": rpc error: code = NotFound desc = could not find container \"ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46\": container with ID starting with ba6c9c4249f46fb60153f39d05d1cfd401a7f0498300b26f7ee23efbb6f21e46 not found: ID does not exist" Jan 26 22:53:05 crc kubenswrapper[4793]: I0126 22:53:05.410518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"cb80297b2ffb8aff654ac1a4db14467c9e252471cef20a15e27ed7ec85ef9d42"} Jan 26 22:53:05 crc kubenswrapper[4793]: I0126 22:53:05.770381 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddad55e-ec81-4cca-9993-f266aceed079" path="/var/lib/kubelet/pods/8ddad55e-ec81-4cca-9993-f266aceed079/volumes" Jan 26 22:53:07 crc kubenswrapper[4793]: I0126 22:53:07.435337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"d780717a5d3c137dcda219498b529539a78ed0d6f243c5eb7a94aff4cee75d4b"} Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.281927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zj8dj"] Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.282811 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.283275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328579 4793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328671 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328695 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" Jan 26 22:53:10 crc kubenswrapper[4793]: E0126 22:53:10.328745 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-zj8dj_openshift-nmstate(786f5045-7022-4802-920e-4fcf30bfe3d7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-zj8dj_openshift-nmstate_786f5045-7022-4802-920e-4fcf30bfe3d7_0(4c26155a77e3957ad14840c906c8aece8daaf7b2a8de1cb2bfe2d6371cb523b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" podUID="786f5045-7022-4802-920e-4fcf30bfe3d7"
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.459723 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" event={"ID":"02e40a32-f700-4300-a55e-7ba957dd8ab8","Type":"ContainerStarted","Data":"73356bdce4930285c79e4b33cde8bcde24fe506c3b8c0f8d21079bf96ba48536"}
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.460164 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.460389 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.460447 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.497800 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.498320 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:10 crc kubenswrapper[4793]: I0126 22:53:10.502498 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws" podStartSLOduration=8.502481751 podStartE2EDuration="8.502481751s" podCreationTimestamp="2026-01-26 22:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:53:10.499049744 +0000 UTC m=+805.487821296" watchObservedRunningTime="2026-01-26 22:53:10.502481751 +0000 UTC m=+805.491253263"
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.323969 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.325500 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.325590 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl"
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.326745 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.326858 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37" gracePeriod=600
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.526229 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37" exitCode=0
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.526299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37"}
Jan 26 22:53:18 crc kubenswrapper[4793]: I0126 22:53:18.526683 4793 scope.go:117] "RemoveContainer" containerID="46ee2d2305b3200387efdd7f31d20295da8c90108bda1a6825298a5ec8faa6fe"
Jan 26 22:53:19 crc kubenswrapper[4793]: I0126 22:53:19.538995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c"}
Jan 26 22:53:24 crc kubenswrapper[4793]: I0126 22:53:24.760499 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj"
Jan 26 22:53:24 crc kubenswrapper[4793]: I0126 22:53:24.761731 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj"
Jan 26 22:53:25 crc kubenswrapper[4793]: I0126 22:53:25.296147 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zj8dj"]
Jan 26 22:53:25 crc kubenswrapper[4793]: I0126 22:53:25.590997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" event={"ID":"786f5045-7022-4802-920e-4fcf30bfe3d7","Type":"ContainerStarted","Data":"08fa8b4fcfee3759f23e389261de0104c3b691063642d88d8178fb8542826139"}
Jan 26 22:53:28 crc kubenswrapper[4793]: I0126 22:53:28.611598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" event={"ID":"786f5045-7022-4802-920e-4fcf30bfe3d7","Type":"ContainerStarted","Data":"b01b0f63fa7df0d567c0f7b827982e3b8b0f116e6d8ef4d7c63f846350c4f02d"}
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.778311 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-zj8dj" podStartSLOduration=23.482489656 podStartE2EDuration="25.778279638s" podCreationTimestamp="2026-01-26 22:53:04 +0000 UTC" firstStartedPulling="2026-01-26 22:53:25.312411352 +0000 UTC m=+820.301182904" lastFinishedPulling="2026-01-26 22:53:27.608201374 +0000 UTC m=+822.596972886" observedRunningTime="2026-01-26 22:53:28.639688107 +0000 UTC m=+823.628459669" watchObservedRunningTime="2026-01-26 22:53:29.778279638 +0000 UTC m=+824.767051190"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.778600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"]
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.780357 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.783121 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zlgg8"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.799012 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"]
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.799999 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.804654 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"]
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.805069 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.822131 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66w4g\" (UniqueName: \"kubernetes.io/projected/0399eddc-c943-4f43-8b69-c5b73e14e5c4-kube-api-access-66w4g\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.822207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.822248 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmv97\" (UniqueName: \"kubernetes.io/projected/4c9631a4-9e0d-4fef-868a-d705499ebac8-kube-api-access-nmv97\") pod \"nmstate-metrics-54757c584b-kgq9g\" (UID: \"4c9631a4-9e0d-4fef-868a-d705499ebac8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.840738 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2xzlk"]
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.841672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.844947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"]
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.923473 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-nmstate-lock\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.923771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv97\" (UniqueName: \"kubernetes.io/projected/4c9631a4-9e0d-4fef-868a-d705499ebac8-kube-api-access-nmv97\") pod \"nmstate-metrics-54757c584b-kgq9g\" (UID: \"4c9631a4-9e0d-4fef-868a-d705499ebac8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.923944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4q6\" (UniqueName: \"kubernetes.io/projected/c6de759f-205a-4bf1-ba92-843e9725d254-kube-api-access-jh4q6\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924047 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66w4g\" (UniqueName: \"kubernetes.io/projected/0399eddc-c943-4f43-8b69-c5b73e14e5c4-kube-api-access-66w4g\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924199 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-dbus-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-ovs-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.924415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:29 crc kubenswrapper[4793]: E0126 22:53:29.924525 4793 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 26 22:53:29 crc kubenswrapper[4793]: E0126 22:53:29.924603 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair podName:0399eddc-c943-4f43-8b69-c5b73e14e5c4 nodeName:}" failed. No retries permitted until 2026-01-26 22:53:30.424582359 +0000 UTC m=+825.413353881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-9q92v" (UID: "0399eddc-c943-4f43-8b69-c5b73e14e5c4") : secret "openshift-nmstate-webhook" not found
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.940441 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"]
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.941287 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.946512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-65dfd"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.947812 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.948020 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.957621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv97\" (UniqueName: \"kubernetes.io/projected/4c9631a4-9e0d-4fef-868a-d705499ebac8-kube-api-access-nmv97\") pod \"nmstate-metrics-54757c584b-kgq9g\" (UID: \"4c9631a4-9e0d-4fef-868a-d705499ebac8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.962401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66w4g\" (UniqueName: \"kubernetes.io/projected/0399eddc-c943-4f43-8b69-c5b73e14e5c4-kube-api-access-66w4g\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:29 crc kubenswrapper[4793]: I0126 22:53:29.964138 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"]
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025288 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztplc\" (UniqueName: \"kubernetes.io/projected/a3a28baa-a522-4cc7-b853-037d96cf0590-kube-api-access-ztplc\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4q6\" (UniqueName: \"kubernetes.io/projected/c6de759f-205a-4bf1-ba92-843e9725d254-kube-api-access-jh4q6\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a3a28baa-a522-4cc7-b853-037d96cf0590-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-dbus-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-ovs-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-nmstate-lock\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-nmstate-lock\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.025994 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-ovs-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.026079 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c6de759f-205a-4bf1-ba92-843e9725d254-dbus-socket\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.043202 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4q6\" (UniqueName: \"kubernetes.io/projected/c6de759f-205a-4bf1-ba92-843e9725d254-kube-api-access-jh4q6\") pod \"nmstate-handler-2xzlk\" (UID: \"c6de759f-205a-4bf1-ba92-843e9725d254\") " pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.100997 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.126544 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztplc\" (UniqueName: \"kubernetes.io/projected/a3a28baa-a522-4cc7-b853-037d96cf0590-kube-api-access-ztplc\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.126707 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.126837 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a3a28baa-a522-4cc7-b853-037d96cf0590-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: E0126 22:53:30.126876 4793 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 26 22:53:30 crc kubenswrapper[4793]: E0126 22:53:30.127075 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert podName:a3a28baa-a522-4cc7-b853-037d96cf0590 nodeName:}" failed. No retries permitted until 2026-01-26 22:53:30.627053837 +0000 UTC m=+825.615825349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-b564g" (UID: "a3a28baa-a522-4cc7-b853-037d96cf0590") : secret "plugin-serving-cert" not found
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.127888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a3a28baa-a522-4cc7-b853-037d96cf0590-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.152577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztplc\" (UniqueName: \"kubernetes.io/projected/a3a28baa-a522-4cc7-b853-037d96cf0590-kube-api-access-ztplc\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.159834 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8c49b7cb8-rgrbq"]
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.160367 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.160731 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.166815 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c49b7cb8-rgrbq"]
Jan 26 22:53:30 crc kubenswrapper[4793]: W0126 22:53:30.221777 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6de759f_205a_4bf1_ba92_843e9725d254.slice/crio-1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd WatchSource:0}: Error finding container 1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd: Status 404 returned error can't find the container with id 1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-trusted-ca-bundle\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228289 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qfk\" (UniqueName: \"kubernetes.io/projected/ce2c6343-3f86-4963-8d0e-d44613014ff0-kube-api-access-h2qfk\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228322 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-oauth-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228346 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-service-ca\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.228420 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-oauth-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.329735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-oauth-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.329985 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330062 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-service-ca\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-oauth-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330147 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-trusted-ca-bundle\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.330248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qfk\" (UniqueName: \"kubernetes.io/projected/ce2c6343-3f86-4963-8d0e-d44613014ff0-kube-api-access-h2qfk\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.331062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.331062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-oauth-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.331603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-service-ca\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.332414 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce2c6343-3f86-4963-8d0e-d44613014ff0-trusted-ca-bundle\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.336878 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-serving-cert\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.339263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce2c6343-3f86-4963-8d0e-d44613014ff0-console-oauth-config\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.347313 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qfk\" (UniqueName: \"kubernetes.io/projected/ce2c6343-3f86-4963-8d0e-d44613014ff0-kube-api-access-h2qfk\") pod \"console-8c49b7cb8-rgrbq\" (UID: \"ce2c6343-3f86-4963-8d0e-d44613014ff0\") " pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.419245 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kgq9g"]
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.431854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.435565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0399eddc-c943-4f43-8b69-c5b73e14e5c4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9q92v\" (UID: \"0399eddc-c943-4f43-8b69-c5b73e14e5c4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.536673 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.627027 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xzlk" event={"ID":"c6de759f-205a-4bf1-ba92-843e9725d254","Type":"ContainerStarted","Data":"1daf989b8253d298d090fd46c8a7852ab6166efc755cdbb3c2f2c3508257bacd"}
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.628624 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" event={"ID":"4c9631a4-9e0d-4fef-868a-d705499ebac8","Type":"ContainerStarted","Data":"1ee34a70dc04d39c20de1fa721623f5b12a6ee2d617ea422af2233365e4e22b3"}
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.640001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.645933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3a28baa-a522-4cc7-b853-037d96cf0590-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b564g\" (UID: \"a3a28baa-a522-4cc7-b853-037d96cf0590\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.722642 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.755123 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c49b7cb8-rgrbq"]
Jan 26 22:53:30 crc kubenswrapper[4793]: W0126 22:53:30.760349 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce2c6343_3f86_4963_8d0e_d44613014ff0.slice/crio-801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb WatchSource:0}: Error finding container 801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb: Status 404 returned error can't find the container with id 801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.859594 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"
Jan 26 22:53:30 crc kubenswrapper[4793]: I0126 22:53:30.999608 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"]
Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.099881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g"]
Jan 26 22:53:31 crc kubenswrapper[4793]: W0126 22:53:31.103064 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3a28baa_a522_4cc7_b853_037d96cf0590.slice/crio-83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734 WatchSource:0}: Error finding container 83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734: Status 404 returned error can't find the container with id 83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734
Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.636181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" event={"ID":"a3a28baa-a522-4cc7-b853-037d96cf0590","Type":"ContainerStarted","Data":"83825e57e6a2b7ecbdaae75d5f92c863b9f57dc433d27b9f718800f587f88734"}
Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.638279 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" event={"ID":"0399eddc-c943-4f43-8b69-c5b73e14e5c4","Type":"ContainerStarted","Data":"6c1bfead4c8e3be0c1f6e243507eb63e735eadf5a14d0db98c2b4a0ab829d4ec"}
Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.640344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c49b7cb8-rgrbq" event={"ID":"ce2c6343-3f86-4963-8d0e-d44613014ff0","Type":"ContainerStarted","Data":"c56897b068e2623e40e14315069aec77b62ac40e412b277d68595ded8d4c6680"}
Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.640408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c49b7cb8-rgrbq" event={"ID":"ce2c6343-3f86-4963-8d0e-d44613014ff0","Type":"ContainerStarted","Data":"801a584500669f1fb66f84191ddc382b2eca050bd575f78cd6584457cdd670eb"}
Jan 26 22:53:31 crc kubenswrapper[4793]: I0126 22:53:31.662696 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8c49b7cb8-rgrbq" podStartSLOduration=1.662652248 podStartE2EDuration="1.662652248s" podCreationTimestamp="2026-01-26 22:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:53:31.658014696 +0000 UTC m=+826.646786218" watchObservedRunningTime="2026-01-26 22:53:31.662652248 +0000 UTC m=+826.651423780"
Jan 26 22:53:32 crc kubenswrapper[4793]: I0126 22:53:32.499405 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lj9ws"
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.655607 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" event={"ID":"4c9631a4-9e0d-4fef-868a-d705499ebac8","Type":"ContainerStarted","Data":"355a3af769cda5995239e241a9ccf77be72572a12a6b96ac4104b88f774c37a9"}
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.657213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" event={"ID":"0399eddc-c943-4f43-8b69-c5b73e14e5c4","Type":"ContainerStarted","Data":"39d51dc8ffc8e76fbf33b766020f76ecde4ed6970a8edf8cdf7b9c54a86b9f56"}
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.657311 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.658922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xzlk" event={"ID":"c6de759f-205a-4bf1-ba92-843e9725d254","Type":"ContainerStarted","Data":"3a24ce080732f68ab10c75ddc527e156bc7169071636f51527869f9c1718a28a"}
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.659256 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.672138 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v" podStartSLOduration=2.840035883 podStartE2EDuration="4.672120176s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:31.01992064 +0000 UTC m=+826.008692152" lastFinishedPulling="2026-01-26 22:53:32.852004923 +0000 UTC m=+827.840776445" observedRunningTime="2026-01-26 22:53:33.671601042 +0000 UTC m=+828.660372554" watchObservedRunningTime="2026-01-26 22:53:33.672120176 +0000 UTC m=+828.660891688"
Jan 26 22:53:33 crc kubenswrapper[4793]: I0126 22:53:33.689435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2xzlk" podStartSLOduration=2.067779321 podStartE2EDuration="4.689416658s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:30.225714373 +0000 UTC m=+825.214485885" lastFinishedPulling="2026-01-26 22:53:32.8473517 +0000 UTC m=+827.836123222" observedRunningTime="2026-01-26 22:53:33.687537385 +0000 UTC m=+828.676308917" watchObservedRunningTime="2026-01-26 22:53:33.689416658 +0000 UTC m=+828.678188170"
Jan 26 22:53:34 crc kubenswrapper[4793]: I0126 22:53:34.669749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" event={"ID":"a3a28baa-a522-4cc7-b853-037d96cf0590","Type":"ContainerStarted","Data":"0617b8b9cb3e21cbfd79f8329c29febaffb4a9cfa345261c746f5bcc60070458"}
Jan 26 22:53:34 crc kubenswrapper[4793]: I0126 22:53:34.731484 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b564g" podStartSLOduration=2.954525349 podStartE2EDuration="5.731462883s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:31.105493754 +0000 UTC m=+826.094265266" lastFinishedPulling="2026-01-26 22:53:33.882431268 +0000 UTC m=+828.871202800" observedRunningTime="2026-01-26 22:53:34.726847252 +0000 UTC m=+829.715618794" watchObservedRunningTime="2026-01-26 22:53:34.731462883 +0000 UTC m=+829.720234405"
Jan 26 22:53:36 crc kubenswrapper[4793]: I0126 22:53:36.688087 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" event={"ID":"4c9631a4-9e0d-4fef-868a-d705499ebac8","Type":"ContainerStarted","Data":"f09532994cfa9e22f4747e8676812dfd151778faa4d2fac4bb9709e2d2c9ce89"}
Jan 26 22:53:36 crc kubenswrapper[4793]: I0126 22:53:36.720817 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-kgq9g" podStartSLOduration=2.441510899 podStartE2EDuration="7.720799477s" podCreationTimestamp="2026-01-26 22:53:29 +0000 UTC" firstStartedPulling="2026-01-26 22:53:30.42461473 +0000 UTC m=+825.413386242" lastFinishedPulling="2026-01-26 22:53:35.703903278 +0000 UTC m=+830.692674820" observedRunningTime="2026-01-26 22:53:36.716811934 +0000 UTC m=+831.705583446" watchObservedRunningTime="2026-01-26 22:53:36.720799477 +0000 UTC m=+831.709570989"
Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.189239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2xzlk"
Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.538544 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.538631 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.546608 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.726544 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8c49b7cb8-rgrbq"
Jan 26 22:53:40 crc kubenswrapper[4793]: I0126 22:53:40.782257 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"]
Jan 26 22:53:46 crc kubenswrapper[4793]: I0126 22:53:46.146339 4793 scope.go:117] "RemoveContainer" containerID="2cb61fdad3703c9db3f70a80af86571cbed8b1dc20e073f4ee149431f71f0298"
Jan 26 22:53:46 crc kubenswrapper[4793]: I0126 22:53:46.776472 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l5qgq_2e6daa0d-7641-46e1-b9ab-8479c1cd00d6/kube-multus/2.log"
Jan 26 22:53:50 crc kubenswrapper[4793]: I0126 22:53:50.735169 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9q92v"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.332135 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"]
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.335154 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.347917 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"]
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.459616 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.459676 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.460408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.562979 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.563181 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.563272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.564659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.564796 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.593410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"certified-operators-vlvrv\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.673060 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:53:56 crc kubenswrapper[4793]: I0126 22:53:56.946039 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"]
Jan 26 22:53:57 crc kubenswrapper[4793]: I0126 22:53:57.866643 4793 generic.go:334] "Generic (PLEG): container finished" podID="a802d493-70e5-40e0-b991-7d58b6570abd" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564" exitCode=0
Jan 26 22:53:57 crc kubenswrapper[4793]: I0126 22:53:57.866764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"}
Jan 26 22:53:57 crc kubenswrapper[4793]: I0126 22:53:57.867167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerStarted","Data":"c361b43e23ef627f6653c64ba1b39c20cc81cca76bd27edcc2b9c43119e89341"}
Jan 26 22:53:58 crc kubenswrapper[4793]: I0126 22:53:58.875492 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerStarted","Data":"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"}
Jan 26 22:53:59 crc kubenswrapper[4793]: I0126 22:53:59.890015 4793 generic.go:334] "Generic (PLEG): container finished" podID="a802d493-70e5-40e0-b991-7d58b6570abd" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63" exitCode=0
Jan 26 22:53:59 crc kubenswrapper[4793]: I0126 22:53:59.890117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"}
Jan 26 22:54:00 crc kubenswrapper[4793]: I0126 22:54:00.905033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerStarted","Data":"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"}
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.318541 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlvrv" podStartSLOduration=6.881922981 podStartE2EDuration="9.318521415s" podCreationTimestamp="2026-01-26 22:53:56 +0000 UTC" firstStartedPulling="2026-01-26 22:53:57.868737249 +0000 UTC m=+852.857508801" lastFinishedPulling="2026-01-26 22:54:00.305335713 +0000 UTC m=+855.294107235" observedRunningTime="2026-01-26 22:54:00.958611512 +0000 UTC m=+855.947383054" watchObservedRunningTime="2026-01-26 22:54:05.318521415 +0000 UTC m=+860.307292927"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.323446 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"]
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.324472 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.333800 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.335111 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"]
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.394176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.394293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.394358 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495910 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.495966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.513029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.644617 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.832891 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-57ccj" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" containerID="cri-o://aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" gracePeriod=15
Jan 26 22:54:05 crc kubenswrapper[4793]: I0126 22:54:05.910904 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs"]
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.168726 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-57ccj_f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd/console/0.log"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.169106 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313582 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313623 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313657 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.313828 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") pod \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\" (UID: \"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd\") "
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.314670 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.314697 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config" (OuterVolumeSpecName: "console-config") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.315078 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.315309 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca" (OuterVolumeSpecName: "service-ca") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.320286 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q" (OuterVolumeSpecName: "kube-api-access-mqq8q") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "kube-api-access-mqq8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.320932 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.322141 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" (UID: "f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415281 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415326 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415340 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415352 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqq8q\" (UniqueName: \"kubernetes.io/projected/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-kube-api-access-mqq8q\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415366 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415381 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.415391 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.673936 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.674027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.736899 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlvrv"
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.950149 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerID="64d5cb9763014f7de2894fd7c4bbd529608ff547250631d88174ddee8e75766c" exitCode=0
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.950249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"64d5cb9763014f7de2894fd7c4bbd529608ff547250631d88174ddee8e75766c"}
Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.950332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerStarted","Data":"b33e7135b1ab732a93c47c2e2988ec9f52bae84e675d192f79df1c3e2d70873e"}
Jan 26 22:54:06 crc kubenswrapper[4793]:
I0126 22:54:06.954521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-57ccj_f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd/console/0.log" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955099 4793 generic.go:334] "Generic (PLEG): container finished" podID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" exitCode=2 Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955250 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57ccj" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerDied","Data":"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"} Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57ccj" event={"ID":"f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd","Type":"ContainerDied","Data":"fc7e45cc09b7e136cabcbaac23e466557070638bb704168bde4eed70e8068d9f"} Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.955407 4793 scope.go:117] "RemoveContainer" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.988490 4793 scope.go:117] "RemoveContainer" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" Jan 26 22:54:06 crc kubenswrapper[4793]: E0126 22:54:06.989300 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa\": container with ID starting with aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa not found: ID does not exist" containerID="aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa" Jan 26 22:54:06 crc kubenswrapper[4793]: I0126 22:54:06.989375 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa"} err="failed to get container status \"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa\": rpc error: code = NotFound desc = could not find container \"aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa\": container with ID starting with aa301cd8d1722c60878c022ad6263d6186c3c83b896ba252032186296d527cfa not found: ID does not exist" Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.011928 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.020505 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-57ccj"] Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.027702 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:54:07 crc kubenswrapper[4793]: I0126 22:54:07.774094 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" path="/var/lib/kubelet/pods/f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd/volumes" Jan 26 22:54:08 crc kubenswrapper[4793]: I0126 22:54:08.984517 4793 generic.go:334] "Generic (PLEG): 
container finished" podID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerID="b9d70838e90a86618cb5150e3376b7facdb4180322aa778b9e6d6d68b22887e4" exitCode=0 Jan 26 22:54:08 crc kubenswrapper[4793]: I0126 22:54:08.985088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"b9d70838e90a86618cb5150e3376b7facdb4180322aa778b9e6d6d68b22887e4"} Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.484119 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"] Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.484622 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlvrv" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server" containerID="cri-o://7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" gracePeriod=2 Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.912135 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.992946 4793 generic.go:334] "Generic (PLEG): container finished" podID="a802d493-70e5-40e0-b991-7d58b6570abd" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" exitCode=0 Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"} Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993099 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlvrv" event={"ID":"a802d493-70e5-40e0-b991-7d58b6570abd","Type":"ContainerDied","Data":"c361b43e23ef627f6653c64ba1b39c20cc81cca76bd27edcc2b9c43119e89341"} Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993105 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlvrv" Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.993126 4793 scope.go:117] "RemoveContainer" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.995989 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerID="cffb1ccca7d2b8c24cc905c579d315558094aaf65352d11411c082529bed8a6d" exitCode=0 Jan 26 22:54:09 crc kubenswrapper[4793]: I0126 22:54:09.996053 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"cffb1ccca7d2b8c24cc905c579d315558094aaf65352d11411c082529bed8a6d"} Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.019310 4793 scope.go:117] "RemoveContainer" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.037962 4793 scope.go:117] "RemoveContainer" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.051697 4793 scope.go:117] "RemoveContainer" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" Jan 26 22:54:10 crc kubenswrapper[4793]: E0126 22:54:10.052755 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c\": container with ID starting with 7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c not found: ID does not exist" containerID="7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.052835 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c"} err="failed to get container status \"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c\": rpc error: code = NotFound desc = could not find container \"7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c\": container with ID starting with 7ee6ea75fb72387e61b66e984ff78efb45e81d3169cc560a2aee02a542cb234c not found: ID does not exist" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.052900 4793 scope.go:117] "RemoveContainer" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63" Jan 26 22:54:10 crc kubenswrapper[4793]: E0126 22:54:10.053363 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63\": container with ID starting with 8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63 not found: ID does not exist" containerID="8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.053406 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63"} err="failed to get container status \"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63\": rpc error: code = NotFound desc = could not find container 
\"8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63\": container with ID starting with 8cea5eb1264922a0e7e44a9cfa200489b716fb496f8e3e5a1bdeba79eb3a6e63 not found: ID does not exist" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.053427 4793 scope.go:117] "RemoveContainer" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564" Jan 26 22:54:10 crc kubenswrapper[4793]: E0126 22:54:10.053999 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564\": container with ID starting with 9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564 not found: ID does not exist" containerID="9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.054071 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564"} err="failed to get container status \"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564\": rpc error: code = NotFound desc = could not find container \"9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564\": container with ID starting with 9c2709d3c4485752d6954a3c7738058bb28d8f789f8b4ad23a0dc8164d873564 not found: ID does not exist" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.072669 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") pod \"a802d493-70e5-40e0-b991-7d58b6570abd\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.072965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") pod \"a802d493-70e5-40e0-b991-7d58b6570abd\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.073055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") pod \"a802d493-70e5-40e0-b991-7d58b6570abd\" (UID: \"a802d493-70e5-40e0-b991-7d58b6570abd\") " Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.073989 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities" (OuterVolumeSpecName: "utilities") pod "a802d493-70e5-40e0-b991-7d58b6570abd" (UID: "a802d493-70e5-40e0-b991-7d58b6570abd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.079172 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk" (OuterVolumeSpecName: "kube-api-access-r7qbk") pod "a802d493-70e5-40e0-b991-7d58b6570abd" (UID: "a802d493-70e5-40e0-b991-7d58b6570abd"). InnerVolumeSpecName "kube-api-access-r7qbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.115365 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a802d493-70e5-40e0-b991-7d58b6570abd" (UID: "a802d493-70e5-40e0-b991-7d58b6570abd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.175273 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.175334 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7qbk\" (UniqueName: \"kubernetes.io/projected/a802d493-70e5-40e0-b991-7d58b6570abd-kube-api-access-r7qbk\") on node \"crc\" DevicePath \"\"" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.175357 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a802d493-70e5-40e0-b991-7d58b6570abd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.349093 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"] Jan 26 22:54:10 crc kubenswrapper[4793]: I0126 22:54:10.353774 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlvrv"] Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.251143 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.392274 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") pod \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.392886 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") pod \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.392946 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") pod \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\" (UID: \"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a\") " Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.393713 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle" (OuterVolumeSpecName: "bundle") pod "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" (UID: "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.399862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh" (OuterVolumeSpecName: "kube-api-access-qtbgh") pod "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" (UID: "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a"). InnerVolumeSpecName "kube-api-access-qtbgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.494713 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.494763 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbgh\" (UniqueName: \"kubernetes.io/projected/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-kube-api-access-qtbgh\") on node \"crc\" DevicePath \"\"" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.619691 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util" (OuterVolumeSpecName: "util") pod "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" (UID: "c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.697125 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a-util\") on node \"crc\" DevicePath \"\"" Jan 26 22:54:11 crc kubenswrapper[4793]: I0126 22:54:11.772037 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" path="/var/lib/kubelet/pods/a802d493-70e5-40e0-b991-7d58b6570abd/volumes" Jan 26 22:54:12 crc kubenswrapper[4793]: I0126 22:54:12.015470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" event={"ID":"c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a","Type":"ContainerDied","Data":"b33e7135b1ab732a93c47c2e2988ec9f52bae84e675d192f79df1c3e2d70873e"} Jan 26 22:54:12 crc kubenswrapper[4793]: I0126 22:54:12.015537 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b33e7135b1ab732a93c47c2e2988ec9f52bae84e675d192f79df1c3e2d70873e" Jan 26 22:54:12 crc kubenswrapper[4793]: I0126 22:54:12.015589 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcst8zs" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.011351 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"] Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012246 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012261 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012277 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-utilities" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012284 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-utilities" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012295 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="pull" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012301 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="pull" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012308 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="util" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012314 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="util" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012321 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-content" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012327 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="extract-content" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012336 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="extract" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012342 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="extract" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.012350 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012356 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012456 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c5514a-ff18-4110-88dd-4d7aeeeaf7bd" containerName="console" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012466 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a802d493-70e5-40e0-b991-7d58b6570abd" containerName="registry-server" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012475 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c0af268e-2c6a-4a7c-a6b3-ee6c6fbf087a" containerName="extract" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.012863 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.016671 4793 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-cert": failed to list *v1.Secret: secrets "metallb-operator-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.016722 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.017216 4793 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-z6pgj": failed to list *v1.Secret: secrets "manager-account-dockercfg-z6pgj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.017267 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-z6pgj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"manager-account-dockercfg-z6pgj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.017210 4793 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "metallb-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 22:54:24 crc kubenswrapper[4793]: W0126 22:54:24.017274 4793 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.017329 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 22:54:24 crc kubenswrapper[4793]: E0126 22:54:24.017293 4793 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"metallb-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.024846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.038780 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"] Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.142606 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"] Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.143386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.145066 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.145086 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.145937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vzpmx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.162473 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"] Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.199366 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-apiservice-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.199421 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.199665 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-webhook-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " 
pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-webhook-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-webhook-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301235 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-apiservice-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301253 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.301308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-apiservice-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.402142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-apiservice-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.402228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.402279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-webhook-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " 
pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.409331 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-apiservice-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.409921 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c33d800-1c21-459c-b488-c4d43f46147a-webhook-cert\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:24 crc kubenswrapper[4793]: I0126 22:54:24.960089 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.101082 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.117896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-apiservice-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.118090 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98b1ff78-307c-4f20-8d26-021aac793b08-webhook-cert\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.320557 4793 projected.go:288] Couldn't get configMap metallb-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.320623 4793 projected.go:194] Error preparing data for projected volume kube-api-access-hwcg4 for pod metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.320688 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4 podName:98b1ff78-307c-4f20-8d26-021aac793b08 nodeName:}" failed. No retries permitted until 2026-01-26 22:54:25.820664641 +0000 UTC m=+880.809436153 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hwcg4" (UniqueName: "kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4") pod "metallb-operator-controller-manager-6947468bd9-d2zfx" (UID: "98b1ff78-307c-4f20-8d26-021aac793b08") : failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.401341 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z6pgj" Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.420381 4793 projected.go:288] Couldn't get configMap metallb-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.420485 4793 projected.go:194] Error preparing data for projected volume kube-api-access-jzz58 for pod metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg: failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: E0126 22:54:25.420575 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58 podName:4c33d800-1c21-459c-b488-c4d43f46147a nodeName:}" failed. No retries permitted until 2026-01-26 22:54:25.920539762 +0000 UTC m=+880.909311274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jzz58" (UniqueName: "kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58") pod "metallb-operator-webhook-server-576dcf7dd-dz2zg" (UID: "4c33d800-1c21-459c-b488-c4d43f46147a") : failed to sync configmap cache: timed out waiting for the condition Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.596804 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.821436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.831679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwcg4\" (UniqueName: \"kubernetes.io/projected/98b1ff78-307c-4f20-8d26-021aac793b08-kube-api-access-hwcg4\") pod \"metallb-operator-controller-manager-6947468bd9-d2zfx\" (UID: \"98b1ff78-307c-4f20-8d26-021aac793b08\") " pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.922948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod \"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.927862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzz58\" (UniqueName: \"kubernetes.io/projected/4c33d800-1c21-459c-b488-c4d43f46147a-kube-api-access-jzz58\") pod 
\"metallb-operator-webhook-server-576dcf7dd-dz2zg\" (UID: \"4c33d800-1c21-459c-b488-c4d43f46147a\") " pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:25 crc kubenswrapper[4793]: I0126 22:54:25.955085 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:26 crc kubenswrapper[4793]: I0126 22:54:26.127758 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:26 crc kubenswrapper[4793]: I0126 22:54:26.206362 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg"] Jan 26 22:54:26 crc kubenswrapper[4793]: I0126 22:54:26.343004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx"] Jan 26 22:54:26 crc kubenswrapper[4793]: W0126 22:54:26.352587 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98b1ff78_307c_4f20_8d26_021aac793b08.slice/crio-23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b WatchSource:0}: Error finding container 23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b: Status 404 returned error can't find the container with id 23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b Jan 26 22:54:27 crc kubenswrapper[4793]: I0126 22:54:27.128081 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" event={"ID":"4c33d800-1c21-459c-b488-c4d43f46147a","Type":"ContainerStarted","Data":"c7a641f856d559af17a3246f0649213b4fd07a6172255821b6f3d13486bf0da4"} Jan 26 22:54:27 crc kubenswrapper[4793]: I0126 22:54:27.129115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" event={"ID":"98b1ff78-307c-4f20-8d26-021aac793b08","Type":"ContainerStarted","Data":"23fa57770728960d38579663fdc509bb14b6b087e2967087a649a2782fb32c2b"} Jan 26 22:54:31 crc kubenswrapper[4793]: I0126 22:54:31.162847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" event={"ID":"98b1ff78-307c-4f20-8d26-021aac793b08","Type":"ContainerStarted","Data":"a14d3d26c2c79e3b15e2049f747606d0aa871fb9cb4c3e1901b17cea9b88abb8"} Jan 26 22:54:31 crc kubenswrapper[4793]: I0126 22:54:31.163669 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:54:31 crc kubenswrapper[4793]: I0126 22:54:31.184277 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" podStartSLOduration=4.422327552 podStartE2EDuration="8.184259819s" podCreationTimestamp="2026-01-26 22:54:23 +0000 UTC" firstStartedPulling="2026-01-26 22:54:26.355577584 +0000 UTC m=+881.344349096" lastFinishedPulling="2026-01-26 22:54:30.117509841 +0000 UTC m=+885.106281363" observedRunningTime="2026-01-26 22:54:31.18045131 +0000 UTC m=+886.169222822" watchObservedRunningTime="2026-01-26 22:54:31.184259819 +0000 UTC m=+886.173031331" Jan 26 22:54:34 crc kubenswrapper[4793]: I0126 22:54:34.184064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" event={"ID":"4c33d800-1c21-459c-b488-c4d43f46147a","Type":"ContainerStarted","Data":"511add134364b50e2d89b5e328e41d302d115dbf33c09d14503e1db153c665c7"} Jan 26 22:54:34 crc kubenswrapper[4793]: I0126 22:54:34.184911 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:54:34 crc kubenswrapper[4793]: I0126 22:54:34.210405 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" podStartSLOduration=3.155309138 podStartE2EDuration="10.210378079s" podCreationTimestamp="2026-01-26 22:54:24 +0000 UTC" firstStartedPulling="2026-01-26 22:54:26.227466901 +0000 UTC m=+881.216238423" lastFinishedPulling="2026-01-26 22:54:33.282535852 +0000 UTC m=+888.271307364" observedRunningTime="2026-01-26 22:54:34.206752616 +0000 UTC m=+889.195524178" watchObservedRunningTime="2026-01-26 22:54:34.210378079 +0000 UTC m=+889.199149621" Jan 26 22:54:45 crc kubenswrapper[4793]: I0126 22:54:45.961029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-576dcf7dd-dz2zg" Jan 26 22:55:06 crc kubenswrapper[4793]: I0126 22:55:06.134863 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6947468bd9-d2zfx" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.059548 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-66sdc"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.064133 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.065940 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.066304 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.068358 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.069267 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.071128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.079541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tm4ps" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.089757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.142800 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-chq2z"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.143763 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.145744 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.146110 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7qgrl" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.146431 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.146968 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.154758 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hbsm9"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.155705 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.161870 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.167106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hbsm9"] Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229412 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-sockets\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229487 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-reloader\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics-certs\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229583 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57471441-2abf-4120-8a8f-e7dffa12be10-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6tdf\" (UniqueName: 
\"kubernetes.io/projected/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-kube-api-access-x6tdf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229669 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjx5\" (UniqueName: \"kubernetes.io/projected/57471441-2abf-4120-8a8f-e7dffa12be10-kube-api-access-prjx5\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-startup\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.229968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-conf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330864 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-metrics-certs\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330915 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics-certs\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.330973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57471441-2abf-4120-8a8f-e7dffa12be10-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6tdf\" (UniqueName: \"kubernetes.io/projected/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-kube-api-access-x6tdf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjx5\" (UniqueName: \"kubernetes.io/projected/57471441-2abf-4120-8a8f-e7dffa12be10-kube-api-access-prjx5\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331257 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-startup\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-conf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-metrics-certs\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331493 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xzt\" (UniqueName: \"kubernetes.io/projected/3c375207-c4cf-4638-af29-710918c39e54-kube-api-access-x9xzt\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-sockets\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-cert\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331730 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3c375207-c4cf-4638-af29-710918c39e54-metallb-excludel2\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331805 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wpft\" (UniqueName: \"kubernetes.io/projected/9b9487b8-aed6-466b-a801-8f826dc81687-kube-api-access-8wpft\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " 
pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.331857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-reloader\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.332721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-reloader\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.333407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-conf\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.333468 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-sockets\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.333556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.334349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-frr-startup\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.338926 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-metrics-certs\") pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.349358 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57471441-2abf-4120-8a8f-e7dffa12be10-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.354943 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjx5\" (UniqueName: \"kubernetes.io/projected/57471441-2abf-4120-8a8f-e7dffa12be10-kube-api-access-prjx5\") pod \"frr-k8s-webhook-server-7df86c4f6c-z6qmr\" (UID: \"57471441-2abf-4120-8a8f-e7dffa12be10\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.356852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6tdf\" (UniqueName: \"kubernetes.io/projected/24ee4fb9-196a-45d2-a9b6-d446b5d92dd8-kube-api-access-x6tdf\") 
pod \"frr-k8s-66sdc\" (UID: \"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8\") " pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.382935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.394985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-metrics-certs\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432902 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xzt\" (UniqueName: \"kubernetes.io/projected/3c375207-c4cf-4638-af29-710918c39e54-kube-api-access-x9xzt\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-cert\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3c375207-c4cf-4638-af29-710918c39e54-metallb-excludel2\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.432989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wpft\" (UniqueName: \"kubernetes.io/projected/9b9487b8-aed6-466b-a801-8f826dc81687-kube-api-access-8wpft\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.433020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-metrics-certs\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.433037 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.433160 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.433242 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist podName:3c375207-c4cf-4638-af29-710918c39e54 nodeName:}" failed. 
No retries permitted until 2026-01-26 22:55:07.933223349 +0000 UTC m=+922.921994861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist") pod "speaker-chq2z" (UID: "3c375207-c4cf-4638-af29-710918c39e54") : secret "metallb-memberlist" not found Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.434351 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3c375207-c4cf-4638-af29-710918c39e54-metallb-excludel2\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.439884 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.441180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-metrics-certs\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.442583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-metrics-certs\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.450806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xzt\" (UniqueName: \"kubernetes.io/projected/3c375207-c4cf-4638-af29-710918c39e54-kube-api-access-x9xzt\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.451868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b9487b8-aed6-466b-a801-8f826dc81687-cert\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.462290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wpft\" (UniqueName: \"kubernetes.io/projected/9b9487b8-aed6-466b-a801-8f826dc81687-kube-api-access-8wpft\") pod \"controller-6968d8fdc4-hbsm9\" (UID: \"9b9487b8-aed6-466b-a801-8f826dc81687\") " pod="metallb-system/controller-6968d8fdc4-hbsm9" Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.471467 4793 util.go:30] "No sandbox for pod can be found. 
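
The failure above is the one interesting event in this span: the speaker pod mounts a Secret named metallb-memberlist that does not exist yet, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry with durationBeforeRetry 500ms; the next failure (below) doubles that to 1s. A rough sketch of the retry pattern, assuming a simplified loop rather than kubelet's real nestedpendingoperations bookkeeping:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountSecret is a stand-in for MountVolume.SetUp on a secret volume;
    // it fails until the Secret exists. Hypothetical helper, not kubelet's
    // real API.
    func mountSecret(secretExists func() bool) error {
        if !secretExists() {
            return errors.New(`secret "metallb-memberlist" not found`)
        }
        return nil
    }

    func main() {
        // Kubelet-style exponential backoff: start at 500ms and double on
        // each consecutive failure (the log shows 500ms, then 1s).
        backoff := 500 * time.Millisecond
        created := time.Now().Add(1200 * time.Millisecond) // the Secret appears later
        exists := func() bool { return time.Now().After(created) }

        for {
            err := mountSecret(exists)
            if err == nil {
                fmt.Println("MountVolume.SetUp succeeded")
                return
            }
            fmt.Printf("failed: %v; no retries permitted for %v\n", err, backoff)
            time.Sleep(backoff)
            backoff *= 2 // double the wait, as seen in durationBeforeRetry
        }
    }
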
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.471467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.797785 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hbsm9"]
Jan 26 22:55:07 crc kubenswrapper[4793]: W0126 22:55:07.807688 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b9487b8_aed6_466b_a801_8f826dc81687.slice/crio-f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1 WatchSource:0}: Error finding container f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1: Status 404 returned error can't find the container with id f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.897770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"]
Jan 26 22:55:07 crc kubenswrapper[4793]: W0126 22:55:07.902325 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57471441_2abf_4120_8a8f_e7dffa12be10.slice/crio-a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571 WatchSource:0}: Error finding container a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571: Status 404 returned error can't find the container with id a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571
Jan 26 22:55:07 crc kubenswrapper[4793]: I0126 22:55:07.942984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.943207 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 22:55:07 crc kubenswrapper[4793]: E0126 22:55:07.943288 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist podName:3c375207-c4cf-4638-af29-710918c39e54 nodeName:}" failed. No retries permitted until 2026-01-26 22:55:08.943265791 +0000 UTC m=+923.932037313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist") pod "speaker-chq2z" (UID: "3c375207-c4cf-4638-af29-710918c39e54") : secret "metallb-memberlist" not found
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.430612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" event={"ID":"57471441-2abf-4120-8a8f-e7dffa12be10","Type":"ContainerStarted","Data":"a087304cbe5e354364897ad0e0606d39348dd597f5ba32d9d02094984e4af571"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.433181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"c203c34fb97c8caea2a74dc05bc64f34665a3068b3f84d722f8a073712363c58"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436452 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hbsm9" event={"ID":"9b9487b8-aed6-466b-a801-8f826dc81687","Type":"ContainerStarted","Data":"f8d969089626ac4c9e032d86c0ea2a00fa5a16f4dbe7f72874e8c4b89c886e63"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hbsm9" event={"ID":"9b9487b8-aed6-466b-a801-8f826dc81687","Type":"ContainerStarted","Data":"f354d7690c9d354b7e9123181963f5cf1fb1be6800dbc95336d8262f7a03e466"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436545 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hbsm9" event={"ID":"9b9487b8-aed6-466b-a801-8f826dc81687","Type":"ContainerStarted","Data":"f7c43c7c720f874b1bd247d8ab56c7c5b534cb9b06fe57cfd8fdc027026491f1"}
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.436663 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.452376 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hbsm9" podStartSLOduration=1.4523536369999999 podStartE2EDuration="1.452353637s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:55:08.451615716 +0000 UTC m=+923.440387238" watchObservedRunningTime="2026-01-26 22:55:08.452353637 +0000 UTC m=+923.441125159"
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.956014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
Jan 26 22:55:08 crc kubenswrapper[4793]: I0126 22:55:08.962581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3c375207-c4cf-4638-af29-710918c39e54-memberlist\") pod \"speaker-chq2z\" (UID: \"3c375207-c4cf-4638-af29-710918c39e54\") " pod="metallb-system/speaker-chq2z"
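
The retry resolved itself once the Secret appeared (the "MountVolume.SetUp succeeded" line for memberlist at 22:55:08.962581 above): the MetalLB controller normally generates metallb-memberlist on its own, so no operator action was needed here. For reference, a hedged client-go sketch of creating such a Secret by hand; the secretkey field name is an assumption based on MetalLB's documented memberlist setup:

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the default kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Name and namespace come from the log; the key is an assumption.
        sec := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "metallb-memberlist",
                Namespace: "metallb-system",
            },
            StringData: map[string]string{"secretkey": "change-me"},
        }
        if _, err := cs.CoreV1().Secrets("metallb-system").Create(
            context.TODO(), sec, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Println(`created secret "metallb-memberlist"; pending mounts can now succeed`)
    }

The two "Failed to process watch event ... 404" warnings in the same span are a benign race: cAdvisor sees the new crio-* cgroup before CRI-O has registered the container, so the first lookup of the container ID returns 404.
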
Jan 26 22:55:09 crc kubenswrapper[4793]: I0126 22:55:09.258165 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-chq2z"
Jan 26 22:55:09 crc kubenswrapper[4793]: W0126 22:55:09.313480 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c375207_c4cf_4638_af29_710918c39e54.slice/crio-2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b WatchSource:0}: Error finding container 2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b: Status 404 returned error can't find the container with id 2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b
Jan 26 22:55:09 crc kubenswrapper[4793]: I0126 22:55:09.477825 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-chq2z" event={"ID":"3c375207-c4cf-4638-af29-710918c39e54","Type":"ContainerStarted","Data":"2500b9e9a7f56fcebfb1f2e7348cea4f1f727ae7e6dbd06d7752d2ffa132e22b"}
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.486600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-chq2z" event={"ID":"3c375207-c4cf-4638-af29-710918c39e54","Type":"ContainerStarted","Data":"dd8fccd79f9b1e0f325bb5a61c902b34406cc6d0f42bf1a375b975aa30a561ab"}
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.487104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-chq2z"
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.487117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-chq2z" event={"ID":"3c375207-c4cf-4638-af29-710918c39e54","Type":"ContainerStarted","Data":"81e415b9c188e6844c1c28642eb3acaaca04c194f760ca755eb63e4f7d7463bf"}
Jan 26 22:55:10 crc kubenswrapper[4793]: I0126 22:55:10.509518 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-chq2z" podStartSLOduration=3.509493407 podStartE2EDuration="3.509493407s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:55:10.504579378 +0000 UTC m=+925.493350890" watchObservedRunningTime="2026-01-26 22:55:10.509493407 +0000 UTC m=+925.498264919"
Jan 26 22:55:17 crc kubenswrapper[4793]: I0126 22:55:17.475250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-hbsm9"
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.323245 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.323778 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.535166 4793 generic.go:334] "Generic (PLEG): container finished" podID="24ee4fb9-196a-45d2-a9b6-d446b5d92dd8" containerID="30347001381e6f4d0ce19e9e3028904a7c222b19b777f9df39a5f9e2e0ee6bc3" exitCode=0
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.535316 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerDied","Data":"30347001381e6f4d0ce19e9e3028904a7c222b19b777f9df39a5f9e2e0ee6bc3"}
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.536981 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" event={"ID":"57471441-2abf-4120-8a8f-e7dffa12be10","Type":"ContainerStarted","Data":"01dc9d52f6525f0367a67286e9fb1910511592b14dff48a1e7dec9573458e1f9"}
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.537171 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr"
Jan 26 22:55:18 crc kubenswrapper[4793]: I0126 22:55:18.617777 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" podStartSLOduration=1.940486342 podStartE2EDuration="11.617747098s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="2026-01-26 22:55:07.905330601 +0000 UTC m=+922.894102113" lastFinishedPulling="2026-01-26 22:55:17.582591357 +0000 UTC m=+932.571362869" observedRunningTime="2026-01-26 22:55:18.611015448 +0000 UTC m=+933.599786970" watchObservedRunningTime="2026-01-26 22:55:18.617747098 +0000 UTC m=+933.606518620"
Jan 26 22:55:19 crc kubenswrapper[4793]: I0126 22:55:19.263029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-chq2z"
Jan 26 22:55:19 crc kubenswrapper[4793]: I0126 22:55:19.545436 4793 generic.go:334] "Generic (PLEG): container finished" podID="24ee4fb9-196a-45d2-a9b6-d446b5d92dd8" containerID="658052373ef6c202aa458c2b673c10303a40492d4046364e497ca2660f07ac44" exitCode=0
Jan 26 22:55:19 crc kubenswrapper[4793]: I0126 22:55:19.545525 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerDied","Data":"658052373ef6c202aa458c2b673c10303a40492d4046364e497ca2660f07ac44"}
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.564061 4793 generic.go:334] "Generic (PLEG): container finished" podID="24ee4fb9-196a-45d2-a9b6-d446b5d92dd8" containerID="6615e117ae9fb89bec524a5e42dbcc955c0f5fd4e61e3b8f857a1423d7f43179" exitCode=0
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.564283 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerDied","Data":"6615e117ae9fb89bec524a5e42dbcc955c0f5fd4e61e3b8f857a1423d7f43179"}
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.790581 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"]
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.796468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.799134 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"]
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.860868 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.966893 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.966947 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:20 crc kubenswrapper[4793]: I0126 22:55:20.966977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.067971 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068047 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.068640 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.090164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.188483 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.450893 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8"]
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.573895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerStarted","Data":"087f60b89e9c3bf70f0db40bd188ccc41371960bd28b5be9bc6386af1bd78af4"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"cf512aba77433d1ce19bc40a93d0513776d7cde9b6768a99265f18a702fa64fc"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"91c5f343a37d9143145167ec75f34335e9dc5af94a83acc6fe25585a718e0a25"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578413 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"dd18a37f7f0065635cf9a3e4a4a2bf7b544cdf78daa2b25dff2f32412b98ba74"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"5c516c417742fa5e3c802567021e583ceab1bdf2fe2397620381c370731deaa5"}
Jan 26 22:55:21 crc kubenswrapper[4793]: I0126 22:55:21.578428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"0ef7ca1796042d463cd64d5bfbe2b5b3f4dd2ce937fec9c53b4ace7f60bf5089"}
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.590635 4793 generic.go:334] "Generic (PLEG): container finished" podID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerID="4e2248746c05716dd7e7cc4274ca2edfd95c254887c340f14588c7df186c2edf" exitCode=0
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.590754 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"4e2248746c05716dd7e7cc4274ca2edfd95c254887c340f14588c7df186c2edf"}
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.601266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-66sdc" event={"ID":"24ee4fb9-196a-45d2-a9b6-d446b5d92dd8","Type":"ContainerStarted","Data":"41a5e52579fbf9b4fcb0318abba4aed9b0c5bd39ba97f909229531cacb3922b3"}
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.601692 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-66sdc"
Jan 26 22:55:22 crc kubenswrapper[4793]: I0126 22:55:22.658527 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-66sdc" podStartSLOduration=5.656503413 podStartE2EDuration="15.658497929s" podCreationTimestamp="2026-01-26 22:55:07 +0000 UTC" firstStartedPulling="2026-01-26 22:55:07.589843452 +0000 UTC m=+922.578614964" lastFinishedPulling="2026-01-26 22:55:17.591837968 +0000 UTC m=+932.580609480" observedRunningTime="2026-01-26 22:55:22.657745208 +0000 UTC m=+937.646516720" watchObservedRunningTime="2026-01-26 22:55:22.658497929 +0000 UTC m=+937.647269441"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.889967 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k7chh"]
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.893718 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.911484 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7chh"]
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.936069 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.936175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:24 crc kubenswrapper[4793]: I0126 22:55:24.936258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.038001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.038065 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.038118 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.039584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.039646 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh"
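
The kube-api-access-zvmrp volume being mounted here (its SetUp completes just below) is the standard projected service-account volume: a bound token projection plus the cluster CA bundle and the pod's namespace. A sketch of what such a volume looks like in the API; the expiration value is an assumption:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Typical contents of a kube-api-access-* projected volume.
        exp := int64(3607)
        vol := corev1.Volume{
            Name: "kube-api-access-zvmrp",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // Short-lived, auto-rotated service account token.
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path: "token", ExpirationSeconds: &exp}},
                        // Cluster CA so the pod can verify the apiserver.
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}}}},
                        // The pod's own namespace via the downward API.
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}}}},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }

The token projection is also why these mounts take slightly longer than plain emptyDirs: the kubelet must request a fresh token from the API server before it can write the volume.
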
"MountVolume.SetUp succeeded for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"community-operators-k7chh\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:25 crc kubenswrapper[4793]: I0126 22:55:25.232337 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.232558 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.645031 4793 generic.go:334] "Generic (PLEG): container finished" podID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerID="f180818f9da9a3ea1fa913a362222c8a8d7909c3db4aab4a331adb7d66ccf614" exitCode=0 Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.645113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"f180818f9da9a3ea1fa913a362222c8a8d7909c3db4aab4a331adb7d66ccf614"} Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.645470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerStarted","Data":"89b8de0329ef0472c524d45732bc1f1e1b3e7868092ebdc86e0346a811451edc"} Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.650009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"3e77a6cea9fe8ca317e94455a6257644cb6b9204630b3f994cc667f96c47bc19"} Jan 26 22:55:26 crc kubenswrapper[4793]: I0126 22:55:26.649952 4793 generic.go:334] "Generic (PLEG): container finished" podID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerID="3e77a6cea9fe8ca317e94455a6257644cb6b9204630b3f994cc667f96c47bc19" exitCode=0 Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.384094 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.403397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z6qmr" Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.456659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.657828 4793 generic.go:334] "Generic (PLEG): container finished" podID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerID="7b570c88f8123b212e277e16f5633cc71221bbbf5f2e843301d4ffb90ae96723" exitCode=0 Jan 26 22:55:27 crc kubenswrapper[4793]: I0126 22:55:27.657940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"7b570c88f8123b212e277e16f5633cc71221bbbf5f2e843301d4ffb90ae96723"} Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.668645 4793 generic.go:334] "Generic (PLEG): container finished" podID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" 
containerID="c71a16f468753e6b6592c9c3c29cfd096037e2a3a1698a31b7f14e70bfe4f910" exitCode=0 Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.668765 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"c71a16f468753e6b6592c9c3c29cfd096037e2a3a1698a31b7f14e70bfe4f910"} Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.978727 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.999874 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") pod \"be420288-3c8c-46cd-8058-b091fc8a7eec\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.999970 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") pod \"be420288-3c8c-46cd-8058-b091fc8a7eec\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:28.999994 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") pod \"be420288-3c8c-46cd-8058-b091fc8a7eec\" (UID: \"be420288-3c8c-46cd-8058-b091fc8a7eec\") " Jan 26 22:55:28 crc kubenswrapper[4793]: I0126 22:55:29.000827 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle" (OuterVolumeSpecName: "bundle") pod "be420288-3c8c-46cd-8058-b091fc8a7eec" (UID: "be420288-3c8c-46cd-8058-b091fc8a7eec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.006646 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk" (OuterVolumeSpecName: "kube-api-access-2f4rk") pod "be420288-3c8c-46cd-8058-b091fc8a7eec" (UID: "be420288-3c8c-46cd-8058-b091fc8a7eec"). InnerVolumeSpecName "kube-api-access-2f4rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.014103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util" (OuterVolumeSpecName: "util") pod "be420288-3c8c-46cd-8058-b091fc8a7eec" (UID: "be420288-3c8c-46cd-8058-b091fc8a7eec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.104058 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4rk\" (UniqueName: \"kubernetes.io/projected/be420288-3c8c-46cd-8058-b091fc8a7eec-kube-api-access-2f4rk\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.104106 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.104118 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be420288-3c8c-46cd-8058-b091fc8a7eec-util\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.677918 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.678037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apkxz8" event={"ID":"be420288-3c8c-46cd-8058-b091fc8a7eec","Type":"ContainerDied","Data":"087f60b89e9c3bf70f0db40bd188ccc41371960bd28b5be9bc6386af1bd78af4"} Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.678591 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="087f60b89e9c3bf70f0db40bd188ccc41371960bd28b5be9bc6386af1bd78af4" Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.680616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerStarted","Data":"49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178"} Jan 26 22:55:29 crc kubenswrapper[4793]: I0126 22:55:29.703346 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k7chh" podStartSLOduration=3.030906137 podStartE2EDuration="5.703326181s" podCreationTimestamp="2026-01-26 22:55:24 +0000 UTC" firstStartedPulling="2026-01-26 22:55:26.647074179 +0000 UTC m=+941.635845691" lastFinishedPulling="2026-01-26 22:55:29.319494213 +0000 UTC m=+944.308265735" observedRunningTime="2026-01-26 22:55:29.700130101 +0000 UTC m=+944.688901653" watchObservedRunningTime="2026-01-26 22:55:29.703326181 +0000 UTC m=+944.692097703" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.353002 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9"] Jan 26 22:55:33 crc kubenswrapper[4793]: E0126 22:55:33.354769 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="pull" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.354793 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="pull" Jan 26 22:55:33 crc kubenswrapper[4793]: E0126 22:55:33.354818 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="util" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.354825 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" 
containerName="util" Jan 26 22:55:33 crc kubenswrapper[4793]: E0126 22:55:33.354838 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="extract" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.354845 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="extract" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.355081 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="be420288-3c8c-46cd-8058-b091fc8a7eec" containerName="extract" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.355641 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.359477 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.359522 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-n54sv" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.361236 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.369824 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9"] Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.455746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.455836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6hh\" (UniqueName: \"kubernetes.io/projected/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-kube-api-access-fg6hh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.557028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.557796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg6hh\" (UniqueName: \"kubernetes.io/projected/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-kube-api-access-fg6hh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.557723 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.586790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg6hh\" (UniqueName: \"kubernetes.io/projected/ac9b05a4-dacb-41bf-ade5-d28fdc391d9b-kube-api-access-fg6hh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-p4jp9\" (UID: \"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.723036 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" Jan 26 22:55:33 crc kubenswrapper[4793]: I0126 22:55:33.948375 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9"] Jan 26 22:55:33 crc kubenswrapper[4793]: W0126 22:55:33.967313 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9b05a4_dacb_41bf_ade5_d28fdc391d9b.slice/crio-c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df WatchSource:0}: Error finding container c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df: Status 404 returned error can't find the container with id c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df Jan 26 22:55:34 crc kubenswrapper[4793]: I0126 22:55:34.717284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" event={"ID":"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b","Type":"ContainerStarted","Data":"c02ffd83a947299bf1aeebe202507d111fa8754aec1b25cbd167eeb946ea29df"} Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.232776 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.233444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.297648 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:35 crc kubenswrapper[4793]: I0126 22:55:35.768604 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:37 crc kubenswrapper[4793]: I0126 22:55:37.385315 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-66sdc" Jan 26 22:55:37 crc kubenswrapper[4793]: I0126 22:55:37.676427 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:38 crc kubenswrapper[4793]: I0126 22:55:38.749470 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k7chh" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" 
containerID="cri-o://49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178" gracePeriod=2 Jan 26 22:55:40 crc kubenswrapper[4793]: I0126 22:55:40.763468 4793 generic.go:334] "Generic (PLEG): container finished" podID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerID="49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178" exitCode=0 Jan 26 22:55:40 crc kubenswrapper[4793]: I0126 22:55:40.763504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178"} Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.655064 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.770948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7chh" event={"ID":"489fdb41-1d60-4db7-a2de-a3829fc71c0c","Type":"ContainerDied","Data":"89b8de0329ef0472c524d45732bc1f1e1b3e7868092ebdc86e0346a811451edc"} Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.771005 4793 scope.go:117] "RemoveContainer" containerID="49a0589138e43f276e0f073a6e8e9c9374aa56fb29ac6d7cf8aec2466adfc178" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.771154 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7chh" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.774245 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" event={"ID":"ac9b05a4-dacb-41bf-ade5-d28fdc391d9b","Type":"ContainerStarted","Data":"8bb3993b16a13ea0ba1ab168af5aeed083b0fa0fc9c2902fe9734ff217b7a2e9"} Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.778438 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") pod \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.778497 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") pod \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.778662 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") pod \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\" (UID: \"489fdb41-1d60-4db7-a2de-a3829fc71c0c\") " Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.779972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities" (OuterVolumeSpecName: "utilities") pod "489fdb41-1d60-4db7-a2de-a3829fc71c0c" (UID: "489fdb41-1d60-4db7-a2de-a3829fc71c0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.790507 4793 scope.go:117] "RemoveContainer" containerID="c71a16f468753e6b6592c9c3c29cfd096037e2a3a1698a31b7f14e70bfe4f910" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.801557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp" (OuterVolumeSpecName: "kube-api-access-zvmrp") pod "489fdb41-1d60-4db7-a2de-a3829fc71c0c" (UID: "489fdb41-1d60-4db7-a2de-a3829fc71c0c"). InnerVolumeSpecName "kube-api-access-zvmrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.802586 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-p4jp9" podStartSLOduration=1.3009408599999999 podStartE2EDuration="8.80256543s" podCreationTimestamp="2026-01-26 22:55:33 +0000 UTC" firstStartedPulling="2026-01-26 22:55:33.96914471 +0000 UTC m=+948.957916222" lastFinishedPulling="2026-01-26 22:55:41.47076927 +0000 UTC m=+956.459540792" observedRunningTime="2026-01-26 22:55:41.800504322 +0000 UTC m=+956.789275844" watchObservedRunningTime="2026-01-26 22:55:41.80256543 +0000 UTC m=+956.791336962" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.832635 4793 scope.go:117] "RemoveContainer" containerID="f180818f9da9a3ea1fa913a362222c8a8d7909c3db4aab4a331adb7d66ccf614" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.855318 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "489fdb41-1d60-4db7-a2de-a3829fc71c0c" (UID: "489fdb41-1d60-4db7-a2de-a3829fc71c0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.880596 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.880626 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489fdb41-1d60-4db7-a2de-a3829fc71c0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:41 crc kubenswrapper[4793]: I0126 22:55:41.880638 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvmrp\" (UniqueName: \"kubernetes.io/projected/489fdb41-1d60-4db7-a2de-a3829fc71c0c-kube-api-access-zvmrp\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:42 crc kubenswrapper[4793]: I0126 22:55:42.120933 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:42 crc kubenswrapper[4793]: I0126 22:55:42.125379 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k7chh"] Jan 26 22:55:43 crc kubenswrapper[4793]: I0126 22:55:43.769298 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" path="/var/lib/kubelet/pods/489fdb41-1d60-4db7-a2de-a3829fc71c0c/volumes" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484036 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:44 crc kubenswrapper[4793]: E0126 22:55:44.484624 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484644 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" Jan 26 22:55:44 crc kubenswrapper[4793]: E0126 22:55:44.484655 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-content" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484662 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-content" Jan 26 22:55:44 crc kubenswrapper[4793]: E0126 22:55:44.484675 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-utilities" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484681 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="extract-utilities" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.484785 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="489fdb41-1d60-4db7-a2de-a3829fc71c0c" containerName="registry-server" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.485570 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.506997 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.617605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.617672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.617726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.719400 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.719520 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.719548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.720076 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.720458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.742407 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"redhat-marketplace-jdpc8\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:44 crc kubenswrapper[4793]: I0126 22:55:44.804353 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.280402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.423100 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdzqz"] Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.424164 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.425896 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-96qfn" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.428581 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.428880 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.428971 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6lqp\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-kube-api-access-p6lqp\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.429097 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.438487 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdzqz"] Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.530160 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.530744 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6lqp\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-kube-api-access-p6lqp\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.550462 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p6lqp\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-kube-api-access-p6lqp\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.551707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07849a03-e9ab-4c1f-9157-cef15ba05e6b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hdzqz\" (UID: \"07849a03-e9ab-4c1f-9157-cef15ba05e6b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.741802 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-96qfn" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.751073 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.826380 4793 generic.go:334] "Generic (PLEG): container finished" podID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" exitCode=0 Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.826459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845"} Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.826534 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerStarted","Data":"1016701f831451a8c8076fd22bf204f8d806deb0be35568f4235399f90ca0410"} Jan 26 22:55:45 crc kubenswrapper[4793]: I0126 22:55:45.942980 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hdzqz"] Jan 26 22:55:46 crc kubenswrapper[4793]: I0126 22:55:46.836808 4793 generic.go:334] "Generic (PLEG): container finished" podID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" exitCode=0 Jan 26 22:55:46 crc kubenswrapper[4793]: I0126 22:55:46.836920 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732"} Jan 26 22:55:46 crc kubenswrapper[4793]: I0126 22:55:46.838865 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" event={"ID":"07849a03-e9ab-4c1f-9157-cef15ba05e6b","Type":"ContainerStarted","Data":"f645b952011f28614efc2846c80acbfbd0b054285e1b20da6a86aee83dfa31a7"} Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.208308 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr"] Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.209482 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.215657 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fj7kv" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.230554 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr"] Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.278359 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.278417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxss\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-kube-api-access-ckxss\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.322915 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.322987 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.379925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.379977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxss\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-kube-api-access-ckxss\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.398041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.398667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckxss\" (UniqueName: 
\"kubernetes.io/projected/e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d-kube-api-access-ckxss\") pod \"cert-manager-cainjector-855d9ccff4-bn9fr\" (UID: \"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.527839 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.865683 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerStarted","Data":"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0"} Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.890441 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdpc8" podStartSLOduration=2.3857182 podStartE2EDuration="4.890415587s" podCreationTimestamp="2026-01-26 22:55:44 +0000 UTC" firstStartedPulling="2026-01-26 22:55:45.828452043 +0000 UTC m=+960.817223555" lastFinishedPulling="2026-01-26 22:55:48.33314943 +0000 UTC m=+963.321920942" observedRunningTime="2026-01-26 22:55:48.889054218 +0000 UTC m=+963.877825730" watchObservedRunningTime="2026-01-26 22:55:48.890415587 +0000 UTC m=+963.879187099" Jan 26 22:55:48 crc kubenswrapper[4793]: I0126 22:55:48.935510 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr"] Jan 26 22:55:48 crc kubenswrapper[4793]: W0126 22:55:48.945220 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a7f0bf_2977_4ed6_8925_8eb8171a8e1d.slice/crio-c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b WatchSource:0}: Error finding container c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b: Status 404 returned error can't find the container with id c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b Jan 26 22:55:49 crc kubenswrapper[4793]: I0126 22:55:49.874284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" event={"ID":"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d","Type":"ContainerStarted","Data":"c343c9f32092023214dbddaedd6289208a329bdef0296de6229d7fbd1c6d442b"} Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.805325 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.805730 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.871643 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:54 crc kubenswrapper[4793]: I0126 22:55:54.968984 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.933787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" event={"ID":"07849a03-e9ab-4c1f-9157-cef15ba05e6b","Type":"ContainerStarted","Data":"14f8735381b7c3ddb190b3640f28ebbfa2e06990c1529cfad31897c54b520b02"} Jan 26 22:55:55 crc 
kubenswrapper[4793]: I0126 22:55:55.933869 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.937022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" event={"ID":"e7a7f0bf-2977-4ed6-8925-8eb8171a8e1d","Type":"ContainerStarted","Data":"aa6a53fef38b20535c55cb0b66ff94f2caa3f80be60ab2dfe5e2f5499fd35aff"} Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.967041 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" podStartSLOduration=2.017948573 podStartE2EDuration="10.967021466s" podCreationTimestamp="2026-01-26 22:55:45 +0000 UTC" firstStartedPulling="2026-01-26 22:55:45.952074434 +0000 UTC m=+960.940845946" lastFinishedPulling="2026-01-26 22:55:54.901147327 +0000 UTC m=+969.889918839" observedRunningTime="2026-01-26 22:55:55.963547138 +0000 UTC m=+970.952318650" watchObservedRunningTime="2026-01-26 22:55:55.967021466 +0000 UTC m=+970.955792978" Jan 26 22:55:55 crc kubenswrapper[4793]: I0126 22:55:55.987152 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bn9fr" podStartSLOduration=2.031990063 podStartE2EDuration="7.987128524s" podCreationTimestamp="2026-01-26 22:55:48 +0000 UTC" firstStartedPulling="2026-01-26 22:55:48.947843178 +0000 UTC m=+963.936614690" lastFinishedPulling="2026-01-26 22:55:54.902981639 +0000 UTC m=+969.891753151" observedRunningTime="2026-01-26 22:55:55.982282157 +0000 UTC m=+970.971053669" watchObservedRunningTime="2026-01-26 22:55:55.987128524 +0000 UTC m=+970.975900036" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.274413 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.274667 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdpc8" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" containerID="cri-o://a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" gracePeriod=2 Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.767291 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.938254 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") pod \"e13b3efd-5989-4b1b-a165-5bb471b1072c\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.939535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") pod \"e13b3efd-5989-4b1b-a165-5bb471b1072c\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.941220 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities" (OuterVolumeSpecName: "utilities") pod "e13b3efd-5989-4b1b-a165-5bb471b1072c" (UID: "e13b3efd-5989-4b1b-a165-5bb471b1072c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.951557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") pod \"e13b3efd-5989-4b1b-a165-5bb471b1072c\" (UID: \"e13b3efd-5989-4b1b-a165-5bb471b1072c\") " Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.953315 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.965671 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6" (OuterVolumeSpecName: "kube-api-access-p4rx6") pod "e13b3efd-5989-4b1b-a165-5bb471b1072c" (UID: "e13b3efd-5989-4b1b-a165-5bb471b1072c"). InnerVolumeSpecName "kube-api-access-p4rx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.966984 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e13b3efd-5989-4b1b-a165-5bb471b1072c" (UID: "e13b3efd-5989-4b1b-a165-5bb471b1072c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973619 4793 generic.go:334] "Generic (PLEG): container finished" podID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" exitCode=0 Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0"} Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973814 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdpc8" Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdpc8" event={"ID":"e13b3efd-5989-4b1b-a165-5bb471b1072c","Type":"ContainerDied","Data":"1016701f831451a8c8076fd22bf204f8d806deb0be35568f4235399f90ca0410"} Jan 26 22:55:58 crc kubenswrapper[4793]: I0126 22:55:58.973962 4793 scope.go:117] "RemoveContainer" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.003986 4793 scope.go:117] "RemoveContainer" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.021170 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.033966 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdpc8"] Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.043218 4793 scope.go:117] "RemoveContainer" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.055121 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rx6\" (UniqueName: \"kubernetes.io/projected/e13b3efd-5989-4b1b-a165-5bb471b1072c-kube-api-access-p4rx6\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.055164 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13b3efd-5989-4b1b-a165-5bb471b1072c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.062604 4793 scope.go:117] "RemoveContainer" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" Jan 26 22:55:59 crc kubenswrapper[4793]: E0126 22:55:59.063076 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0\": container with ID starting with a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0 not found: ID does not exist" containerID="a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063123 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0"} err="failed to get container status \"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0\": rpc error: code = NotFound desc = could not find container \"a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0\": container with ID starting with a2ae5dbe596716dd2f2d875a5bc5eb5752431f3e98260fbb0e82d5aab2a195b0 not found: ID does not exist" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063159 4793 scope.go:117] "RemoveContainer" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" Jan 26 22:55:59 crc kubenswrapper[4793]: E0126 22:55:59.063721 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732\": container with ID starting with 
0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732 not found: ID does not exist" containerID="0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063748 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732"} err="failed to get container status \"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732\": rpc error: code = NotFound desc = could not find container \"0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732\": container with ID starting with 0658eb9b35d127fe16a9703c0e1b463e2ddee208e2f7f661030392adaf014732 not found: ID does not exist" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.063770 4793 scope.go:117] "RemoveContainer" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" Jan 26 22:55:59 crc kubenswrapper[4793]: E0126 22:55:59.064120 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845\": container with ID starting with b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845 not found: ID does not exist" containerID="b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.064164 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845"} err="failed to get container status \"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845\": rpc error: code = NotFound desc = could not find container \"b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845\": container with ID starting with b62d2910349e90dab8ff7e1efde3f449c8b1c39782b9151e3bb0cc91c81a2845 not found: ID does not exist" Jan 26 22:55:59 crc kubenswrapper[4793]: I0126 22:55:59.772750 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" path="/var/lib/kubelet/pods/e13b3efd-5989-4b1b-a165-5bb471b1072c/volumes" Jan 26 22:56:00 crc kubenswrapper[4793]: I0126 22:56:00.756998 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-hdzqz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.536924 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9gdgz"] Jan 26 22:56:04 crc kubenswrapper[4793]: E0126 22:56:04.537623 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-utilities" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537641 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-utilities" Jan 26 22:56:04 crc kubenswrapper[4793]: E0126 22:56:04.537657 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-content" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537665 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="extract-content" Jan 26 22:56:04 crc kubenswrapper[4793]: E0126 22:56:04.537682 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537689 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.537814 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13b3efd-5989-4b1b-a165-5bb471b1072c" containerName="registry-server" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.538296 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.541167 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-56kh4" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.546569 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9gdgz"] Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.556562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpbd\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-kube-api-access-sqpbd\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.556683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-bound-sa-token\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.657094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-bound-sa-token\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.657482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqpbd\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-kube-api-access-sqpbd\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.676068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-bound-sa-token\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.682013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqpbd\" (UniqueName: \"kubernetes.io/projected/e2b3e9b2-2f1c-415c-88e8-65168a6d0f37-kube-api-access-sqpbd\") pod \"cert-manager-86cb77c54b-9gdgz\" (UID: \"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37\") " pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:04 crc kubenswrapper[4793]: I0126 22:56:04.886580 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" Jan 26 22:56:05 crc kubenswrapper[4793]: I0126 22:56:05.124772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9gdgz"] Jan 26 22:56:06 crc kubenswrapper[4793]: I0126 22:56:06.036450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" event={"ID":"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37","Type":"ContainerStarted","Data":"f5ba0f2c15e6402fe13b2c85e4a396b70035d4818a596b077e3ce6aedaf8a296"} Jan 26 22:56:07 crc kubenswrapper[4793]: I0126 22:56:07.047100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" event={"ID":"e2b3e9b2-2f1c-415c-88e8-65168a6d0f37","Type":"ContainerStarted","Data":"b91c96405b7710da1c629eb43fb767d0760fc199571ce4ce7b16417b602770a1"} Jan 26 22:56:07 crc kubenswrapper[4793]: I0126 22:56:07.074100 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-9gdgz" podStartSLOduration=3.074064446 podStartE2EDuration="3.074064446s" podCreationTimestamp="2026-01-26 22:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:56:07.069283261 +0000 UTC m=+982.058054813" watchObservedRunningTime="2026-01-26 22:56:07.074064446 +0000 UTC m=+982.062835998" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.322509 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.323390 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.323481 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.324544 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.324634 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c" gracePeriod=600 Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.688098 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.689353 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.691099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"openstack-operator-index-cvp27\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.691601 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.692732 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kwftg" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.695618 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.711756 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.792706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"openstack-operator-index-cvp27\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:18 crc kubenswrapper[4793]: I0126 22:56:18.817227 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"openstack-operator-index-cvp27\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.014231 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.142623 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c" exitCode=0 Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.142835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c"} Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.143180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"} Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.143263 4793 scope.go:117] "RemoveContainer" containerID="f579552814843c2cef89c29637b851bbb824f6610b4162a130c2d08c83775a37" Jan 26 22:56:19 crc kubenswrapper[4793]: I0126 22:56:19.255092 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:19 crc kubenswrapper[4793]: W0126 22:56:19.258557 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52221c9f_0f2d_4dca_8a2b_92199673793a.slice/crio-da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961 WatchSource:0}: Error finding container da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961: Status 404 returned error can't find the container with id da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961 Jan 26 22:56:20 crc kubenswrapper[4793]: I0126 22:56:20.158975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerStarted","Data":"da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961"} Jan 26 22:56:22 crc kubenswrapper[4793]: I0126 22:56:22.182709 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerStarted","Data":"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"} Jan 26 22:56:22 crc kubenswrapper[4793]: I0126 22:56:22.208681 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cvp27" podStartSLOduration=2.079958187 podStartE2EDuration="4.208655767s" podCreationTimestamp="2026-01-26 22:56:18 +0000 UTC" firstStartedPulling="2026-01-26 22:56:19.260677052 +0000 UTC m=+994.249448564" lastFinishedPulling="2026-01-26 22:56:21.389374632 +0000 UTC m=+996.378146144" observedRunningTime="2026-01-26 22:56:22.205076556 +0000 UTC m=+997.193848098" watchObservedRunningTime="2026-01-26 22:56:22.208655767 +0000 UTC m=+997.197427309" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.082311 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.199913 4793 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-cvp27" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server" containerID="cri-o://b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" gracePeriod=2 Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.679754 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j9sn2"] Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.681578 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.731498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctf5k\" (UniqueName: \"kubernetes.io/projected/b2f6b1be-cc0d-47e0-8375-704e63bf5a2b-kube-api-access-ctf5k\") pod \"openstack-operator-index-j9sn2\" (UID: \"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b\") " pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.738827 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j9sn2"] Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.740954 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.833037 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") pod \"52221c9f-0f2d-4dca-8a2b-92199673793a\" (UID: \"52221c9f-0f2d-4dca-8a2b-92199673793a\") " Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.833347 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctf5k\" (UniqueName: \"kubernetes.io/projected/b2f6b1be-cc0d-47e0-8375-704e63bf5a2b-kube-api-access-ctf5k\") pod \"openstack-operator-index-j9sn2\" (UID: \"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b\") " pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.840355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb" (OuterVolumeSpecName: "kube-api-access-8tmdb") pod "52221c9f-0f2d-4dca-8a2b-92199673793a" (UID: "52221c9f-0f2d-4dca-8a2b-92199673793a"). InnerVolumeSpecName "kube-api-access-8tmdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.855015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctf5k\" (UniqueName: \"kubernetes.io/projected/b2f6b1be-cc0d-47e0-8375-704e63bf5a2b-kube-api-access-ctf5k\") pod \"openstack-operator-index-j9sn2\" (UID: \"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b\") " pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:24 crc kubenswrapper[4793]: I0126 22:56:24.935251 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmdb\" (UniqueName: \"kubernetes.io/projected/52221c9f-0f2d-4dca-8a2b-92199673793a-kube-api-access-8tmdb\") on node \"crc\" DevicePath \"\"" Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.056653 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211575 4793 generic.go:334] "Generic (PLEG): container finished" podID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" exitCode=0 Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211640 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvp27" Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211674 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerDied","Data":"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"} Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211794 4793 scope.go:117] "RemoveContainer" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.211772 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvp27" event={"ID":"52221c9f-0f2d-4dca-8a2b-92199673793a","Type":"ContainerDied","Data":"da94b4d032f6418c433292d7c4a31a15fc5dbe734a3926d2791523b0d192d961"} Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.268596 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.269273 4793 scope.go:117] "RemoveContainer" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" Jan 26 22:56:25 crc kubenswrapper[4793]: E0126 22:56:25.270102 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050\": container with ID starting with b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050 not found: ID does not exist" containerID="b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050" Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.270155 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050"} err="failed to get container status \"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050\": rpc error: code = NotFound desc = could not find container \"b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050\": container with ID starting with b75ec33dfafd69d81d952c0681ba329636edb70785d84875c8be883245b50050 not found: ID does not exist" Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.280790 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cvp27"] Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.545756 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j9sn2"] Jan 26 22:56:25 crc kubenswrapper[4793]: W0126 22:56:25.550939 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2f6b1be_cc0d_47e0_8375_704e63bf5a2b.slice/crio-4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655 WatchSource:0}: Error finding container 4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655: Status 
404 returned error can't find the container with id 4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655 Jan 26 22:56:25 crc kubenswrapper[4793]: I0126 22:56:25.769973 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" path="/var/lib/kubelet/pods/52221c9f-0f2d-4dca-8a2b-92199673793a/volumes" Jan 26 22:56:26 crc kubenswrapper[4793]: I0126 22:56:26.221844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j9sn2" event={"ID":"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b","Type":"ContainerStarted","Data":"4b40c03b3b8d3943c90d1be4e0bbbdb57940ef5727dc2b8cccefb384d71c3655"} Jan 26 22:56:27 crc kubenswrapper[4793]: I0126 22:56:27.233811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j9sn2" event={"ID":"b2f6b1be-cc0d-47e0-8375-704e63bf5a2b","Type":"ContainerStarted","Data":"a4b586543ce07e0ad7433ddd442b27859c07e41fa8d29adaeb1b73bdffeae014"} Jan 26 22:56:27 crc kubenswrapper[4793]: I0126 22:56:27.256795 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j9sn2" podStartSLOduration=2.452412491 podStartE2EDuration="3.256772675s" podCreationTimestamp="2026-01-26 22:56:24 +0000 UTC" firstStartedPulling="2026-01-26 22:56:25.554347972 +0000 UTC m=+1000.543119484" lastFinishedPulling="2026-01-26 22:56:26.358708156 +0000 UTC m=+1001.347479668" observedRunningTime="2026-01-26 22:56:27.252165685 +0000 UTC m=+1002.240937237" watchObservedRunningTime="2026-01-26 22:56:27.256772675 +0000 UTC m=+1002.245544187" Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.056975 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.058026 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.096059 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:35 crc kubenswrapper[4793]: I0126 22:56:35.331050 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j9sn2" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.153682 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"] Jan 26 22:56:37 crc kubenswrapper[4793]: E0126 22:56:37.154822 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.154855 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.155122 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="52221c9f-0f2d-4dca-8a2b-92199673793a" containerName="registry-server" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.157079 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.161751 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-97km8" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.169294 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"] Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.332753 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.332794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.332855 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.434849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.457644 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.530894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:37 crc kubenswrapper[4793]: I0126 22:56:37.745662 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf"] Jan 26 22:56:38 crc kubenswrapper[4793]: I0126 22:56:38.330028 4793 generic.go:334] "Generic (PLEG): container finished" podID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerID="3ebc6d238faa7316ad00ace81fc8416b6a8e4b12e5066d0517ece15124d92e53" exitCode=0 Jan 26 22:56:38 crc kubenswrapper[4793]: I0126 22:56:38.330256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"3ebc6d238faa7316ad00ace81fc8416b6a8e4b12e5066d0517ece15124d92e53"} Jan 26 22:56:38 crc kubenswrapper[4793]: I0126 22:56:38.331859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerStarted","Data":"247f41f3a3cf823a6d43a3ea44330009eff99966da1a98a548112762fa16ab18"} Jan 26 22:56:39 crc kubenswrapper[4793]: I0126 22:56:39.345748 4793 generic.go:334] "Generic (PLEG): container finished" podID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerID="77e9f2a55841c6f32c6ebcee84cfca82631d548703243564d2625739d6cd45a2" exitCode=0 Jan 26 22:56:39 crc kubenswrapper[4793]: I0126 22:56:39.345826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"77e9f2a55841c6f32c6ebcee84cfca82631d548703243564d2625739d6cd45a2"} Jan 26 22:56:40 crc kubenswrapper[4793]: I0126 22:56:40.356113 4793 generic.go:334] "Generic (PLEG): container finished" podID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerID="aa64eebc8908f22f6ef5b8bd88933134c568a3b0aadc2266bc47dbe357678b2f" exitCode=0 Jan 26 22:56:40 crc kubenswrapper[4793]: I0126 22:56:40.356164 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"aa64eebc8908f22f6ef5b8bd88933134c568a3b0aadc2266bc47dbe357678b2f"} Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.665172 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.819185 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") pod \"fcc64fc2-d867-4a08-8df7-0464de3c133e\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.819479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") pod \"fcc64fc2-d867-4a08-8df7-0464de3c133e\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.819555 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") pod \"fcc64fc2-d867-4a08-8df7-0464de3c133e\" (UID: \"fcc64fc2-d867-4a08-8df7-0464de3c133e\") " Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.821110 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle" (OuterVolumeSpecName: "bundle") pod "fcc64fc2-d867-4a08-8df7-0464de3c133e" (UID: "fcc64fc2-d867-4a08-8df7-0464de3c133e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.828583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr" (OuterVolumeSpecName: "kube-api-access-4vjrr") pod "fcc64fc2-d867-4a08-8df7-0464de3c133e" (UID: "fcc64fc2-d867-4a08-8df7-0464de3c133e"). InnerVolumeSpecName "kube-api-access-4vjrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.856477 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util" (OuterVolumeSpecName: "util") pod "fcc64fc2-d867-4a08-8df7-0464de3c133e" (UID: "fcc64fc2-d867-4a08-8df7-0464de3c133e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.922058 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.922118 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vjrr\" (UniqueName: \"kubernetes.io/projected/fcc64fc2-d867-4a08-8df7-0464de3c133e-kube-api-access-4vjrr\") on node \"crc\" DevicePath \"\"" Jan 26 22:56:41 crc kubenswrapper[4793]: I0126 22:56:41.922139 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcc64fc2-d867-4a08-8df7-0464de3c133e-util\") on node \"crc\" DevicePath \"\"" Jan 26 22:56:42 crc kubenswrapper[4793]: I0126 22:56:42.379411 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" event={"ID":"fcc64fc2-d867-4a08-8df7-0464de3c133e","Type":"ContainerDied","Data":"247f41f3a3cf823a6d43a3ea44330009eff99966da1a98a548112762fa16ab18"} Jan 26 22:56:42 crc kubenswrapper[4793]: I0126 22:56:42.379492 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247f41f3a3cf823a6d43a3ea44330009eff99966da1a98a548112762fa16ab18" Jan 26 22:56:42 crc kubenswrapper[4793]: I0126 22:56:42.379530 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569090 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 22:56:44 crc kubenswrapper[4793]: E0126 22:56:44.569742 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="pull" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569756 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="pull" Jan 26 22:56:44 crc kubenswrapper[4793]: E0126 22:56:44.569763 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="util" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569770 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="util" Jan 26 22:56:44 crc kubenswrapper[4793]: E0126 22:56:44.569795 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="extract" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569802 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="extract" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.569908 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc64fc2-d867-4a08-8df7-0464de3c133e" containerName="extract" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.570334 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.572916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-stxpn" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.602270 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.767545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"openstack-operator-controller-init-844f6594fb-khtqs\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.868928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"openstack-operator-controller-init-844f6594fb-khtqs\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:56:44 crc kubenswrapper[4793]: I0126 22:56:44.899112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"openstack-operator-controller-init-844f6594fb-khtqs\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:56:45 crc kubenswrapper[4793]: I0126 22:56:45.187910 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:56:45 crc kubenswrapper[4793]: I0126 22:56:45.710665 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 22:56:46 crc kubenswrapper[4793]: I0126 22:56:46.414101 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerStarted","Data":"e948222ad969867dfca2ac3068c15d29f9c21e465ef410e50da6d8c9a179da98"} Jan 26 22:56:50 crc kubenswrapper[4793]: I0126 22:56:50.483899 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerStarted","Data":"4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d"} Jan 26 22:56:50 crc kubenswrapper[4793]: I0126 22:56:50.484471 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:56:50 crc kubenswrapper[4793]: I0126 22:56:50.542583 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" podStartSLOduration=2.8115100809999998 podStartE2EDuration="6.542561039s" podCreationTimestamp="2026-01-26 22:56:44 +0000 UTC" firstStartedPulling="2026-01-26 22:56:45.722091778 +0000 UTC m=+1020.710863290" lastFinishedPulling="2026-01-26 22:56:49.453142746 +0000 UTC m=+1024.441914248" observedRunningTime="2026-01-26 22:56:50.529210992 +0000 UTC m=+1025.517982504" watchObservedRunningTime="2026-01-26 22:56:50.542561039 +0000 UTC m=+1025.531332561" Jan 26 22:56:55 crc kubenswrapper[4793]: I0126 22:56:55.191504 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.305210 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.306894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.310925 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-v44tz" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.347413 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.397042 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.398132 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.404715 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lsvqx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.408491 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.414721 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.415801 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.419328 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kl6l4" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.419764 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.420951 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.423938 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xrlt8" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.425249 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.426230 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.426377 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxstf\" (UniqueName: \"kubernetes.io/projected/71c7d600-2dac-4e19-a33f-0311a8342774-kube-api-access-gxstf\") pod \"barbican-operator-controller-manager-6987f66698-542qx\" (UID: \"71c7d600-2dac-4e19-a33f-0311a8342774\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.430467 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.431589 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.437764 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.438407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tmbhr" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.438735 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-49lns" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.443275 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.453616 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.454782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.456659 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.457042 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lxnhg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.480487 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.496480 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tjx\" (UniqueName: \"kubernetes.io/projected/58fa089e-6ccd-4521-88ff-e65e6928b738-kube-api-access-68tjx\") pod \"cinder-operator-controller-manager-655bf9cfbb-dgk2z\" (UID: \"58fa089e-6ccd-4521-88ff-e65e6928b738\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr5s\" (UniqueName: \"kubernetes.io/projected/42a387a4-faad-41fe-bfa9-0f600a06e6e0-kube-api-access-4gr5s\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkwjg\" (UID: \"42a387a4-faad-41fe-bfa9-0f600a06e6e0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg46x\" (UniqueName: \"kubernetes.io/projected/f15117fc-93f9-498b-b831-e87094aa991e-kube-api-access-vg46x\") pod \"glance-operator-controller-manager-67dd55ff59-cpwtx\" (UID: \"f15117fc-93f9-498b-b831-e87094aa991e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 
crc kubenswrapper[4793]: I0126 22:57:13.531303 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2zf\" (UniqueName: \"kubernetes.io/projected/18b41715-6c23-4821-b0da-1c17b5010375-kube-api-access-fg2zf\") pod \"heat-operator-controller-manager-954b94f75-fnjqt\" (UID: \"18b41715-6c23-4821-b0da-1c17b5010375\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrt7k\" (UniqueName: \"kubernetes.io/projected/eb8ed64e-38aa-4e1f-be80-29be415125fd-kube-api-access-xrt7k\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531372 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfknp\" (UniqueName: \"kubernetes.io/projected/4190249d-f34c-4b1a-a4da-038f7a806fc6-kube-api-access-wfknp\") pod \"designate-operator-controller-manager-77554cdc5c-vnt9q\" (UID: \"4190249d-f34c-4b1a-a4da-038f7a806fc6\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531413 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxstf\" (UniqueName: \"kubernetes.io/projected/71c7d600-2dac-4e19-a33f-0311a8342774-kube-api-access-gxstf\") pod \"barbican-operator-controller-manager-6987f66698-542qx\" (UID: \"71c7d600-2dac-4e19-a33f-0311a8342774\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.531446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.543250 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.582589 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.583384 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.583520 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.585687 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxstf\" (UniqueName: \"kubernetes.io/projected/71c7d600-2dac-4e19-a33f-0311a8342774-kube-api-access-gxstf\") pod \"barbican-operator-controller-manager-6987f66698-542qx\" (UID: \"71c7d600-2dac-4e19-a33f-0311a8342774\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.591591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kw6rf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.595164 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.596283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.598937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6gjfm" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.612541 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.629241 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.630378 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tjx\" (UniqueName: \"kubernetes.io/projected/58fa089e-6ccd-4521-88ff-e65e6928b738-kube-api-access-68tjx\") pod \"cinder-operator-controller-manager-655bf9cfbb-dgk2z\" (UID: \"58fa089e-6ccd-4521-88ff-e65e6928b738\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633941 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr5s\" (UniqueName: \"kubernetes.io/projected/42a387a4-faad-41fe-bfa9-0f600a06e6e0-kube-api-access-4gr5s\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkwjg\" (UID: \"42a387a4-faad-41fe-bfa9-0f600a06e6e0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg46x\" (UniqueName: \"kubernetes.io/projected/f15117fc-93f9-498b-b831-e87094aa991e-kube-api-access-vg46x\") pod \"glance-operator-controller-manager-67dd55ff59-cpwtx\" (UID: \"f15117fc-93f9-498b-b831-e87094aa991e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.633984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg2zf\" (UniqueName: \"kubernetes.io/projected/18b41715-6c23-4821-b0da-1c17b5010375-kube-api-access-fg2zf\") pod \"heat-operator-controller-manager-954b94f75-fnjqt\" (UID: \"18b41715-6c23-4821-b0da-1c17b5010375\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9g2\" (UniqueName: \"kubernetes.io/projected/22d5cae5-26fb-4a47-97ac-78ba7120d29c-kube-api-access-rm9g2\") pod \"keystone-operator-controller-manager-55f684fd56-g2r87\" (UID: \"22d5cae5-26fb-4a47-97ac-78ba7120d29c\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634036 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzf9\" (UniqueName: \"kubernetes.io/projected/bcf4192a-ff9b-4b02-83d0-a17ecd3ba795-kube-api-access-8dzf9\") pod \"ironic-operator-controller-manager-768b776ffb-7twhn\" (UID: \"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrt7k\" (UniqueName: 
\"kubernetes.io/projected/eb8ed64e-38aa-4e1f-be80-29be415125fd-kube-api-access-xrt7k\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.634094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfknp\" (UniqueName: \"kubernetes.io/projected/4190249d-f34c-4b1a-a4da-038f7a806fc6-kube-api-access-wfknp\") pod \"designate-operator-controller-manager-77554cdc5c-vnt9q\" (UID: \"4190249d-f34c-4b1a-a4da-038f7a806fc6\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: E0126 22:57:13.634563 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:13 crc kubenswrapper[4793]: E0126 22:57:13.634643 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:14.134617181 +0000 UTC m=+1049.123388693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.638865 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-874vv" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.657696 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.658907 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.661699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8d7wf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.671220 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr5s\" (UniqueName: \"kubernetes.io/projected/42a387a4-faad-41fe-bfa9-0f600a06e6e0-kube-api-access-4gr5s\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkwjg\" (UID: \"42a387a4-faad-41fe-bfa9-0f600a06e6e0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.676366 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfknp\" (UniqueName: \"kubernetes.io/projected/4190249d-f34c-4b1a-a4da-038f7a806fc6-kube-api-access-wfknp\") pod \"designate-operator-controller-manager-77554cdc5c-vnt9q\" (UID: \"4190249d-f34c-4b1a-a4da-038f7a806fc6\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.680373 4793 util.go:30] "No sandbox for pod can be found. 
Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.689499 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.707797 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.709149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrt7k\" (UniqueName: \"kubernetes.io/projected/eb8ed64e-38aa-4e1f-be80-29be415125fd-kube-api-access-xrt7k\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.709981 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2zf\" (UniqueName: \"kubernetes.io/projected/18b41715-6c23-4821-b0da-1c17b5010375-kube-api-access-fg2zf\") pod \"heat-operator-controller-manager-954b94f75-fnjqt\" (UID: \"18b41715-6c23-4821-b0da-1c17b5010375\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.710277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg46x\" (UniqueName: \"kubernetes.io/projected/f15117fc-93f9-498b-b831-e87094aa991e-kube-api-access-vg46x\") pod \"glance-operator-controller-manager-67dd55ff59-cpwtx\" (UID: \"f15117fc-93f9-498b-b831-e87094aa991e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.710745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tjx\" (UniqueName: \"kubernetes.io/projected/58fa089e-6ccd-4521-88ff-e65e6928b738-kube-api-access-68tjx\") pod \"cinder-operator-controller-manager-655bf9cfbb-dgk2z\" (UID: \"58fa089e-6ccd-4521-88ff-e65e6928b738\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.719163 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.720154 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.722427 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.725696 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lwd7b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.739918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m8m\" (UniqueName: \"kubernetes.io/projected/da1ca3d4-2bae-4133-88c8-b67f74ebc6ab-kube-api-access-l9m8m\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf\" (UID: \"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.739968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm9g2\" (UniqueName: \"kubernetes.io/projected/22d5cae5-26fb-4a47-97ac-78ba7120d29c-kube-api-access-rm9g2\") pod \"keystone-operator-controller-manager-55f684fd56-g2r87\" (UID: \"22d5cae5-26fb-4a47-97ac-78ba7120d29c\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.739989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dzf9\" (UniqueName: \"kubernetes.io/projected/bcf4192a-ff9b-4b02-83d0-a17ecd3ba795-kube-api-access-8dzf9\") pod \"ironic-operator-controller-manager-768b776ffb-7twhn\" (UID: \"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.740041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshpw\" (UniqueName: \"kubernetes.io/projected/c6fcff5b-7bdb-4607-bd4d-389c863ddba6-kube-api-access-pshpw\") pod \"manila-operator-controller-manager-849fcfbb6b-flkdn\" (UID: \"c6fcff5b-7bdb-4607-bd4d-389c863ddba6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.740454 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.745419 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.746398 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.747856 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6mfk4" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.764220 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm9g2\" (UniqueName: \"kubernetes.io/projected/22d5cae5-26fb-4a47-97ac-78ba7120d29c-kube-api-access-rm9g2\") pod \"keystone-operator-controller-manager-55f684fd56-g2r87\" (UID: \"22d5cae5-26fb-4a47-97ac-78ba7120d29c\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.775759 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.778561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.783221 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dzf9\" (UniqueName: \"kubernetes.io/projected/bcf4192a-ff9b-4b02-83d0-a17ecd3ba795-kube-api-access-8dzf9\") pod \"ironic-operator-controller-manager-768b776ffb-7twhn\" (UID: \"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.788127 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.798463 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.798603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.801078 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.809463 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zkdqv" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.810101 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.842900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9m8m\" (UniqueName: \"kubernetes.io/projected/da1ca3d4-2bae-4133-88c8-b67f74ebc6ab-kube-api-access-l9m8m\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf\" (UID: \"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.843083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshpw\" (UniqueName: \"kubernetes.io/projected/c6fcff5b-7bdb-4607-bd4d-389c863ddba6-kube-api-access-pshpw\") pod \"manila-operator-controller-manager-849fcfbb6b-flkdn\" (UID: \"c6fcff5b-7bdb-4607-bd4d-389c863ddba6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.846772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.847769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"nova-operator-controller-manager-5b4fc7b894-55w8b\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.867548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9m8m\" (UniqueName: \"kubernetes.io/projected/da1ca3d4-2bae-4133-88c8-b67f74ebc6ab-kube-api-access-l9m8m\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf\" (UID: \"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.877835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshpw\" (UniqueName: \"kubernetes.io/projected/c6fcff5b-7bdb-4607-bd4d-389c863ddba6-kube-api-access-pshpw\") pod \"manila-operator-controller-manager-849fcfbb6b-flkdn\" (UID: \"c6fcff5b-7bdb-4607-bd4d-389c863ddba6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.917799 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.923467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.923521 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.927630 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.929420 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.929446 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q4r9v" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.938449 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.939995 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.942366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7wwlz" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.957116 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2trgc\" (UniqueName: \"kubernetes.io/projected/4f086064-ee5a-47cf-bf97-fc2a423d8c33-kube-api-access-2trgc\") pod \"octavia-operator-controller-manager-756f86fc74-mvr7p\" (UID: \"4f086064-ee5a-47cf-bf97-fc2a423d8c33\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.957926 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psc2m\" (UniqueName: \"kubernetes.io/projected/b934b85e-14a0-4ad2-bdc0-82280eb346a9-kube-api-access-psc2m\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vxk8h\" (UID: \"b934b85e-14a0-4ad2-bdc0-82280eb346a9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.958118 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"nova-operator-controller-manager-5b4fc7b894-55w8b\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.961249 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.962011 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.969427 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.979934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"nova-operator-controller-manager-5b4fc7b894-55w8b\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.980131 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.981393 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.984329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vbwhf" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.987650 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg"] Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.989568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:13 crc kubenswrapper[4793]: I0126 22:57:13.992084 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c4zkj" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.008301 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.035927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.069355 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070539 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2trgc\" (UniqueName: \"kubernetes.io/projected/4f086064-ee5a-47cf-bf97-fc2a423d8c33-kube-api-access-2trgc\") pod \"octavia-operator-controller-manager-756f86fc74-mvr7p\" (UID: \"4f086064-ee5a-47cf-bf97-fc2a423d8c33\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psc2m\" (UniqueName: \"kubernetes.io/projected/b934b85e-14a0-4ad2-bdc0-82280eb346a9-kube-api-access-psc2m\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vxk8h\" (UID: \"b934b85e-14a0-4ad2-bdc0-82280eb346a9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070623 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwkx\" (UniqueName: \"kubernetes.io/projected/86ca275a-4c50-479e-9ed5-4e03dda309cf-kube-api-access-9pwkx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.070702 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcmmm\" (UniqueName: \"kubernetes.io/projected/6e0878fd-53dc-4048-9267-2520c5919067-kube-api-access-bcmmm\") pod \"ovn-operator-controller-manager-6f75f45d54-2jn8d\" (UID: \"6e0878fd-53dc-4048-9267-2520c5919067\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.089157 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.095282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psc2m\" (UniqueName: \"kubernetes.io/projected/b934b85e-14a0-4ad2-bdc0-82280eb346a9-kube-api-access-psc2m\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vxk8h\" (UID: \"b934b85e-14a0-4ad2-bdc0-82280eb346a9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.098012 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.100097 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.102629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2trgc\" (UniqueName: \"kubernetes.io/projected/4f086064-ee5a-47cf-bf97-fc2a423d8c33-kube-api-access-2trgc\") pod \"octavia-operator-controller-manager-756f86fc74-mvr7p\" (UID: \"4f086064-ee5a-47cf-bf97-fc2a423d8c33\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.104761 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pdhml" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.111561 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.120424 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.135418 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.153952 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.155161 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.160753 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-b5wd7" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cxj\" (UniqueName: \"kubernetes.io/projected/06401d29-ca17-43df-865a-26523cf84f67-kube-api-access-z8cxj\") pod \"swift-operator-controller-manager-547cbdb99f-rrmcg\" (UID: \"06401d29-ca17-43df-865a-26523cf84f67\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwkx\" (UniqueName: \"kubernetes.io/projected/86ca275a-4c50-479e-9ed5-4e03dda309cf-kube-api-access-9pwkx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171566 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171589 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrf8j\" (UniqueName: \"kubernetes.io/projected/7968980a-764c-4cd2-b77f-ed33fffbd294-kube-api-access-jrf8j\") pod \"placement-operator-controller-manager-79d5ccc684-bp8k8\" (UID: \"7968980a-764c-4cd2-b77f-ed33fffbd294\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.171650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcmmm\" (UniqueName: \"kubernetes.io/projected/6e0878fd-53dc-4048-9267-2520c5919067-kube-api-access-bcmmm\") pod \"ovn-operator-controller-manager-6f75f45d54-2jn8d\" (UID: \"6e0878fd-53dc-4048-9267-2520c5919067\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172179 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172250 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:14.672235826 +0000 UTC m=+1049.661007338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172425 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.172446 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.172439142 +0000 UTC m=+1050.161210654 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.180106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.197672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwkx\" (UniqueName: \"kubernetes.io/projected/86ca275a-4c50-479e-9ed5-4e03dda309cf-kube-api-access-9pwkx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.201661 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcmmm\" (UniqueName: \"kubernetes.io/projected/6e0878fd-53dc-4048-9267-2520c5919067-kube-api-access-bcmmm\") pod \"ovn-operator-controller-manager-6f75f45d54-2jn8d\" (UID: \"6e0878fd-53dc-4048-9267-2520c5919067\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.219137 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.220492 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.240789 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-79fv7" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.244730 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.274029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cxj\" (UniqueName: \"kubernetes.io/projected/06401d29-ca17-43df-865a-26523cf84f67-kube-api-access-z8cxj\") pod \"swift-operator-controller-manager-547cbdb99f-rrmcg\" (UID: \"06401d29-ca17-43df-865a-26523cf84f67\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.274365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz8k8\" (UniqueName: \"kubernetes.io/projected/65c6d17b-b112-421c-a686-ce1601f91181-kube-api-access-sz8k8\") pod \"telemetry-operator-controller-manager-799bc87c89-j292m\" (UID: \"65c6d17b-b112-421c-a686-ce1601f91181\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.274491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5gv\" (UniqueName: \"kubernetes.io/projected/6e38b64f-cd6a-4cac-a125-be388bb0dc78-kube-api-access-6l5gv\") pod \"test-operator-controller-manager-69797bbcbd-sgcrb\" (UID: \"6e38b64f-cd6a-4cac-a125-be388bb0dc78\") 
" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.278538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrf8j\" (UniqueName: \"kubernetes.io/projected/7968980a-764c-4cd2-b77f-ed33fffbd294-kube-api-access-jrf8j\") pod \"placement-operator-controller-manager-79d5ccc684-bp8k8\" (UID: \"7968980a-764c-4cd2-b77f-ed33fffbd294\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.309065 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.327209 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cxj\" (UniqueName: \"kubernetes.io/projected/06401d29-ca17-43df-865a-26523cf84f67-kube-api-access-z8cxj\") pod \"swift-operator-controller-manager-547cbdb99f-rrmcg\" (UID: \"06401d29-ca17-43df-865a-26523cf84f67\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.327119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrf8j\" (UniqueName: \"kubernetes.io/projected/7968980a-764c-4cd2-b77f-ed33fffbd294-kube-api-access-jrf8j\") pod \"placement-operator-controller-manager-79d5ccc684-bp8k8\" (UID: \"7968980a-764c-4cd2-b77f-ed33fffbd294\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.336838 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.344798 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.346737 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.349096 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.349685 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.350404 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lmwx4" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.362392 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.370743 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.375761 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.377403 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.380407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hhsw8" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.381014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5gv\" (UniqueName: \"kubernetes.io/projected/6e38b64f-cd6a-4cac-a125-be388bb0dc78-kube-api-access-6l5gv\") pod \"test-operator-controller-manager-69797bbcbd-sgcrb\" (UID: \"6e38b64f-cd6a-4cac-a125-be388bb0dc78\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.381132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7mf\" (UniqueName: \"kubernetes.io/projected/d0e46999-98e6-40a6-99a2-f4c01dad0f81-kube-api-access-pq7mf\") pod \"watcher-operator-controller-manager-cb484894b-qxf7r\" (UID: \"d0e46999-98e6-40a6-99a2-f4c01dad0f81\") " pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.381167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz8k8\" (UniqueName: \"kubernetes.io/projected/65c6d17b-b112-421c-a686-ce1601f91181-kube-api-access-sz8k8\") pod \"telemetry-operator-controller-manager-799bc87c89-j292m\" (UID: \"65c6d17b-b112-421c-a686-ce1601f91181\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.384326 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.412116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz8k8\" (UniqueName: \"kubernetes.io/projected/65c6d17b-b112-421c-a686-ce1601f91181-kube-api-access-sz8k8\") pod \"telemetry-operator-controller-manager-799bc87c89-j292m\" (UID: \"65c6d17b-b112-421c-a686-ce1601f91181\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.449582 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.455381 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5gv\" (UniqueName: \"kubernetes.io/projected/6e38b64f-cd6a-4cac-a125-be388bb0dc78-kube-api-access-6l5gv\") pod \"test-operator-controller-manager-69797bbcbd-sgcrb\" (UID: \"6e38b64f-cd6a-4cac-a125-be388bb0dc78\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.486965 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.500581 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.501278 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-542qx"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521134 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnn4\" (UniqueName: \"kubernetes.io/projected/d4c8a5bf-4e12-46b4-9d2a-1a098416e90a-kube-api-access-mgnn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vzmsv\" (UID: \"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521275 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzhdf\" (UniqueName: \"kubernetes.io/projected/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-kube-api-access-mzhdf\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7mf\" (UniqueName: \"kubernetes.io/projected/d0e46999-98e6-40a6-99a2-f4c01dad0f81-kube-api-access-pq7mf\") pod \"watcher-operator-controller-manager-cb484894b-qxf7r\" (UID: \"d0e46999-98e6-40a6-99a2-f4c01dad0f81\") " pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.521484 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.548944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7mf\" (UniqueName: \"kubernetes.io/projected/d0e46999-98e6-40a6-99a2-f4c01dad0f81-kube-api-access-pq7mf\") pod \"watcher-operator-controller-manager-cb484894b-qxf7r\" (UID: \"d0e46999-98e6-40a6-99a2-f4c01dad0f81\") " pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.557057 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q"] Jan 26 22:57:14 crc 
kubenswrapper[4793]: I0126 22:57:14.622378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgnn4\" (UniqueName: \"kubernetes.io/projected/d4c8a5bf-4e12-46b4-9d2a-1a098416e90a-kube-api-access-mgnn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vzmsv\" (UID: \"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622427 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzhdf\" (UniqueName: \"kubernetes.io/projected/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-kube-api-access-mzhdf\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.622528 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.622670 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.622721 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.122704455 +0000 UTC m=+1050.111475967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.623092 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.623121 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.123114347 +0000 UTC m=+1050.111885859 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.651092 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.659295 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzhdf\" (UniqueName: \"kubernetes.io/projected/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-kube-api-access-mzhdf\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.659824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgnn4\" (UniqueName: \"kubernetes.io/projected/d4c8a5bf-4e12-46b4-9d2a-1a098416e90a-kube-api-access-mgnn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vzmsv\" (UID: \"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.678558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" event={"ID":"58fa089e-6ccd-4521-88ff-e65e6928b738","Type":"ContainerStarted","Data":"7aa39c08a6f64dc91cd340bd456e5890a6b0398de38cd4767da6c517854781d4"} Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.690613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" event={"ID":"71c7d600-2dac-4e19-a33f-0311a8342774","Type":"ContainerStarted","Data":"e81fc8d96a20c225f4515a3f343ede412ae28484e9ca162a4b0cca92fc969977"} Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.699149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" event={"ID":"4190249d-f34c-4b1a-a4da-038f7a806fc6","Type":"ContainerStarted","Data":"62909fd9103571eb2404966c205ae5e93c923ef4c53d9984df76e142827191ec"} Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.724146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.724453 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: E0126 22:57:14.724515 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:15.72449666 +0000 UTC m=+1050.713268162 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.732083 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.776429 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.786043 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.804496 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt"] Jan 26 22:57:14 crc kubenswrapper[4793]: W0126 22:57:14.834805 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a387a4_faad_41fe_bfa9_0f600a06e6e0.slice/crio-7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c WatchSource:0}: Error finding container 7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c: Status 404 returned error can't find the container with id 7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.921855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf"] Jan 26 22:57:14 crc kubenswrapper[4793]: W0126 22:57:14.938580 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda1ca3d4_2bae_4133_88c8_b67f74ebc6ab.slice/crio-904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571 WatchSource:0}: Error finding container 904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571: Status 404 returned error can't find the container with id 904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571 Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.959367 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.965396 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.969558 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h"] Jan 26 22:57:14 crc kubenswrapper[4793]: I0126 22:57:14.972433 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.065808 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d"] Jan 26 22:57:15 crc kubenswrapper[4793]: W0126 22:57:15.073540 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f086064_ee5a_47cf_bf97_fc2a423d8c33.slice/crio-d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba WatchSource:0}: Error finding container d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba: Status 404 returned error can't find the container with id d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.080378 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.081760 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcmmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-2jn8d_openstack-operators(6e0878fd-53dc-4048-9267-2520c5919067): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.083307 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podUID="6e0878fd-53dc-4048-9267-2520c5919067" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.091424 4793 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.138349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.138474 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.138643 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.138736 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.139174 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:16.138733667 +0000 UTC m=+1051.127505189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.139223 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:16.13920913 +0000 UTC m=+1051.127980652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.239098 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.239474 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.239589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:17.239560354 +0000 UTC m=+1052.228331866 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.254893 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.279598 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.281856 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgnn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vzmsv_openstack-operators(d4c8a5bf-4e12-46b4-9d2a-1a098416e90a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.282210 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l5gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-sgcrb_openstack-operators(6e38b64f-cd6a-4cac-a125-be388bb0dc78): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.283345 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podUID="6e38b64f-cd6a-4cac-a125-be388bb0dc78" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.283390 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podUID="d4c8a5bf-4e12-46b4-9d2a-1a098416e90a" Jan 26 22:57:15 crc kubenswrapper[4793]: W0126 22:57:15.318437 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06401d29_ca17_43df_865a_26523cf84f67.slice/crio-9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e WatchSource:0}: Error finding container 9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e: Status 404 returned error can't find the container with id 9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.323263 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv"] Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.332517 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.354212 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8cxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-rrmcg_openstack-operators(06401d29-ca17-43df-865a-26523cf84f67): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.355969 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podUID="06401d29-ca17-43df-865a-26523cf84f67" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.382533 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m"] Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.421501 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz8k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-j292m_openstack-operators(65c6d17b-b112-421c-a686-ce1601f91181): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.422616 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podUID="65c6d17b-b112-421c-a686-ce1601f91181" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.442153 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r"] Jan 26 22:57:15 crc kubenswrapper[4793]: W0126 22:57:15.504920 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e46999_98e6_40a6_99a2_f4c01dad0f81.slice/crio-7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102 WatchSource:0}: Error finding container 7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102: Status 404 returned error can't find the container with id 7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102 Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.710348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" event={"ID":"4f086064-ee5a-47cf-bf97-fc2a423d8c33","Type":"ContainerStarted","Data":"d57ea2b5b16d68b55061eeeface30a6bcce65b771c4873a23932db5e52c054ba"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.712603 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" event={"ID":"22d5cae5-26fb-4a47-97ac-78ba7120d29c","Type":"ContainerStarted","Data":"aad8872fe9b6f6e54005edae2bb793fe822303c6a00529fb0303108932a5990e"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.715540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" event={"ID":"6e0878fd-53dc-4048-9267-2520c5919067","Type":"ContainerStarted","Data":"912f0e8341d91ea6b380c8c5da5824c20630ef96d17f00e7e297cd969c60a67c"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.717458 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" 
podUID="6e0878fd-53dc-4048-9267-2520c5919067" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.718176 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" event={"ID":"18b41715-6c23-4821-b0da-1c17b5010375","Type":"ContainerStarted","Data":"fb8448305486298a4f9d180cc5253e59129dbd2fb0eb81d07e83eb4ed1a23555"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.720810 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" event={"ID":"06401d29-ca17-43df-865a-26523cf84f67","Type":"ContainerStarted","Data":"9f7807fb8dbbb9408e1f0e1a2fccbbf80071f9b2b00e8d964cbc45986ec88b5e"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.723715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" event={"ID":"65c6d17b-b112-421c-a686-ce1601f91181","Type":"ContainerStarted","Data":"9ccb72589df362943a49aebbb3d7dd9e7d7570991846123d354b80abb9c7f1e0"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.724943 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podUID="06401d29-ca17-43df-865a-26523cf84f67" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.726748 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" event={"ID":"c6fcff5b-7bdb-4607-bd4d-389c863ddba6","Type":"ContainerStarted","Data":"4281698cf49ced2dc828bc1eb747adadc844a720239fabf8cda8679ece77b04a"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.726939 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podUID="65c6d17b-b112-421c-a686-ce1601f91181" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.729935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" event={"ID":"b934b85e-14a0-4ad2-bdc0-82280eb346a9","Type":"ContainerStarted","Data":"eacfa9d0c65e16f680ab10bd9838468514e0c1def06cbf29f59bd743f14309aa"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.734286 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" event={"ID":"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795","Type":"ContainerStarted","Data":"08483100df0e7db2d370aff8f79acfa633ee461ac6bb956b00cd0c5e5ec602a1"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.743532 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" event={"ID":"d0e46999-98e6-40a6-99a2-f4c01dad0f81","Type":"ContainerStarted","Data":"7fd8fc05d46dd65b196e18f36564f0f7da58ed336597f692dc7ba1a4d7f2e102"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.748064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" event={"ID":"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab","Type":"ContainerStarted","Data":"904901b71f9e54ad3580256c34ece857b3a93456ca6bec43786a16b2488f7571"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.752148 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" event={"ID":"7968980a-764c-4cd2-b77f-ed33fffbd294","Type":"ContainerStarted","Data":"9d844a09321bb3df6b7dcd6f224c5e14d22b6f10f0e4177663e8cbe590495724"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.754642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" event={"ID":"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a","Type":"ContainerStarted","Data":"2eaa3af567b72422d79f4fe366a3c7eded6a6b9de3cafe4f58d4e8e759941159"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.762630 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podUID="d4c8a5bf-4e12-46b4-9d2a-1a098416e90a" Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.770624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.770816 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.771675 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:17.771645835 +0000 UTC m=+1052.760417347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerStarted","Data":"a5c867b9927299be310ccbc0cb616dae9ad68f5d7f00ed7bfa18b82b1aeee089"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801447 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" event={"ID":"42a387a4-faad-41fe-bfa9-0f600a06e6e0","Type":"ContainerStarted","Data":"7ee3750efcba611d6cdc0d3756ebb536c9635fff9cf83b86e22a6dd460b8417c"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" event={"ID":"6e38b64f-cd6a-4cac-a125-be388bb0dc78","Type":"ContainerStarted","Data":"551b85009d2cd9d2b417dfef21561c43143061397f7429c4ffe6e9320d310629"} Jan 26 22:57:15 crc kubenswrapper[4793]: I0126 22:57:15.801483 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" event={"ID":"f15117fc-93f9-498b-b831-e87094aa991e","Type":"ContainerStarted","Data":"4817eef049e4621cc079b9edc2d5b6c0576f114e00e73638c7a2cff89397d768"} Jan 26 22:57:15 crc kubenswrapper[4793]: E0126 22:57:15.817402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podUID="6e38b64f-cd6a-4cac-a125-be388bb0dc78" Jan 26 22:57:16 crc kubenswrapper[4793]: I0126 22:57:16.184931 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:16 crc kubenswrapper[4793]: I0126 22:57:16.185043 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185252 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185317 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. 
No retries permitted until 2026-01-26 22:57:18.185295775 +0000 UTC m=+1053.174067287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185713 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.185741 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:18.185731747 +0000 UTC m=+1053.174503259 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.821624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podUID="65c6d17b-b112-421c-a686-ce1601f91181" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.822002 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podUID="6e38b64f-cd6a-4cac-a125-be388bb0dc78" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.822053 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podUID="06401d29-ca17-43df-865a-26523cf84f67" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.822093 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podUID="d4c8a5bf-4e12-46b4-9d2a-1a098416e90a" Jan 26 22:57:16 crc kubenswrapper[4793]: E0126 22:57:16.825397 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podUID="6e0878fd-53dc-4048-9267-2520c5919067" Jan 26 22:57:17 crc kubenswrapper[4793]: I0126 22:57:17.319731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.319995 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.320053 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:21.320033146 +0000 UTC m=+1056.308804658 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:17 crc kubenswrapper[4793]: I0126 22:57:17.832349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.832538 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:17 crc kubenswrapper[4793]: E0126 22:57:17.832915 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:21.832897389 +0000 UTC m=+1056.821668891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:18 crc kubenswrapper[4793]: I0126 22:57:18.249057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:18 crc kubenswrapper[4793]: I0126 22:57:18.249242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.249557 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.249644 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:22.249619385 +0000 UTC m=+1057.238390887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.250269 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:18 crc kubenswrapper[4793]: E0126 22:57:18.250334 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:22.250298284 +0000 UTC m=+1057.239069796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:21 crc kubenswrapper[4793]: I0126 22:57:21.409868 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.410278 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.410589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert podName:eb8ed64e-38aa-4e1f-be80-29be415125fd nodeName:}" failed. No retries permitted until 2026-01-26 22:57:29.410543072 +0000 UTC m=+1064.399314624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert") pod "infra-operator-controller-manager-7d75bc88d5-8zm8h" (UID: "eb8ed64e-38aa-4e1f-be80-29be415125fd") : secret "infra-operator-webhook-server-cert" not found Jan 26 22:57:21 crc kubenswrapper[4793]: I0126 22:57:21.919943 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.920437 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:21 crc kubenswrapper[4793]: E0126 22:57:21.920483 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:29.920466443 +0000 UTC m=+1064.909237955 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:22 crc kubenswrapper[4793]: I0126 22:57:22.327350 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:22 crc kubenswrapper[4793]: I0126 22:57:22.327577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327615 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327753 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:30.327715614 +0000 UTC m=+1065.316487166 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "metrics-server-cert" not found Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327832 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:22 crc kubenswrapper[4793]: E0126 22:57:22.327930 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:30.32789858 +0000 UTC m=+1065.316670132 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.847353 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591" Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.848645 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68tjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-655bf9cfbb-dgk2z_openstack-operators(58fa089e-6ccd-4521-88ff-e65e6928b738): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.850086 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" 
podUID="58fa089e-6ccd-4521-88ff-e65e6928b738" Jan 26 22:57:27 crc kubenswrapper[4793]: E0126 22:57:27.909049 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" podUID="58fa089e-6ccd-4521-88ff-e65e6928b738" Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.343920 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:7c854adeefcf36df519c8d48e1cfc8ca8959c5023083f2b923ba50134c4dfa22" Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.344117 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:7c854adeefcf36df519c8d48e1cfc8ca8959c5023083f2b923ba50134c4dfa22,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2trgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-756f86fc74-mvr7p_openstack-operators(4f086064-ee5a-47cf-bf97-fc2a423d8c33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.345285 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" podUID="4f086064-ee5a-47cf-bf97-fc2a423d8c33" Jan 26 22:57:28 crc kubenswrapper[4793]: E0126 22:57:28.915441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:7c854adeefcf36df519c8d48e1cfc8ca8959c5023083f2b923ba50134c4dfa22\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" podUID="4f086064-ee5a-47cf-bf97-fc2a423d8c33" Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.448089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.457520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8ed64e-38aa-4e1f-be80-29be415125fd-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8zm8h\" (UID: \"eb8ed64e-38aa-4e1f-be80-29be415125fd\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.725337 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.836439 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.836645 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrf8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-bp8k8_openstack-operators(7968980a-764c-4cd2-b77f-ed33fffbd294): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.838570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" podUID="7968980a-764c-4cd2-b77f-ed33fffbd294" Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.923602 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" podUID="7968980a-764c-4cd2-b77f-ed33fffbd294" Jan 26 22:57:29 crc kubenswrapper[4793]: I0126 22:57:29.955601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.955776 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:29 crc kubenswrapper[4793]: E0126 22:57:29.955873 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert podName:86ca275a-4c50-479e-9ed5-4e03dda309cf nodeName:}" failed. No retries permitted until 2026-01-26 22:57:45.955855415 +0000 UTC m=+1080.944626927 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" (UID: "86ca275a-4c50-479e-9ed5-4e03dda309cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 22:57:30 crc kubenswrapper[4793]: I0126 22:57:30.361025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:30 crc kubenswrapper[4793]: I0126 22:57:30.361121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.361358 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.361423 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs podName:b1cd2fec-ba5b-4984-90c5-565df4ef5cd1 nodeName:}" failed. No retries permitted until 2026-01-26 22:57:46.361406269 +0000 UTC m=+1081.350177781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs") pod "openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" (UID: "b1cd2fec-ba5b-4984-90c5-565df4ef5cd1") : secret "webhook-server-cert" not found Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.364788 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.364986 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psc2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-vxk8h_openstack-operators(b934b85e-14a0-4ad2-bdc0-82280eb346a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.366479 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" podUID="b934b85e-14a0-4ad2-bdc0-82280eb346a9" Jan 26 22:57:30 crc kubenswrapper[4793]: I0126 22:57:30.369407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-metrics-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.926203 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" podUID="b934b85e-14a0-4ad2-bdc0-82280eb346a9" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.977985 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:53ed5a94d4b8864152053b8d3c3ac0765d178c4db033fefe0ca8dab887d74922" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.978236 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:53ed5a94d4b8864152053b8d3c3ac0765d178c4db033fefe0ca8dab887d74922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pq7mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-cb484894b-qxf7r_openstack-operators(d0e46999-98e6-40a6-99a2-f4c01dad0f81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:30 crc kubenswrapper[4793]: E0126 22:57:30.979445 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" podUID="d0e46999-98e6-40a6-99a2-f4c01dad0f81" Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.472641 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39" Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.473009 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vg46x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-67dd55ff59-cpwtx_openstack-operators(f15117fc-93f9-498b-b831-e87094aa991e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.475700 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" podUID="f15117fc-93f9-498b-b831-e87094aa991e" Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.943534 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:bc45409dff26aca6bd982684cfaf093548adb6a71928f5257fe60ab5535dda39\\\"\"" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" podUID="f15117fc-93f9-498b-b831-e87094aa991e" Jan 26 22:57:31 crc kubenswrapper[4793]: E0126 22:57:31.944500 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:53ed5a94d4b8864152053b8d3c3ac0765d178c4db033fefe0ca8dab887d74922\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" podUID="d0e46999-98e6-40a6-99a2-f4c01dad0f81" Jan 26 
22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.297776 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.297978 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-vkwjg_openstack-operators(42a387a4-faad-41fe-bfa9-0f600a06e6e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.299148 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" podUID="42a387a4-faad-41fe-bfa9-0f600a06e6e0" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.704952 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.705024 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.705180 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5b4fc7b894-55w8b_openstack-operators(c130e1aa-1f05-45ff-8364-714b79fa7282): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.706408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.958007 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" podUID="42a387a4-faad-41fe-bfa9-0f600a06e6e0" Jan 26 22:57:32 crc kubenswrapper[4793]: E0126 22:57:32.958011 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.94:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.716415 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.716651 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rm9g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-g2r87_openstack-operators(22d5cae5-26fb-4a47-97ac-78ba7120d29c): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.718023 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" podUID="22d5cae5-26fb-4a47-97ac-78ba7120d29c" Jan 26 22:57:33 crc kubenswrapper[4793]: E0126 22:57:33.962534 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" podUID="22d5cae5-26fb-4a47-97ac-78ba7120d29c" Jan 26 22:57:36 crc kubenswrapper[4793]: I0126 22:57:36.605508 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h"] Jan 26 22:57:36 crc kubenswrapper[4793]: W0126 22:57:36.737363 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb8ed64e_38aa_4e1f_be80_29be415125fd.slice/crio-e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb WatchSource:0}: Error finding container e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb: Status 404 returned error can't find the container with id e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb Jan 26 22:57:36 crc kubenswrapper[4793]: I0126 22:57:36.984858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" event={"ID":"eb8ed64e-38aa-4e1f-be80-29be415125fd","Type":"ContainerStarted","Data":"e4ba73d0af77e0ab359943ef706bfabae2051cd682e21088dcd3184cd120b9fb"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.003408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" event={"ID":"4190249d-f34c-4b1a-a4da-038f7a806fc6","Type":"ContainerStarted","Data":"d23dd0621b5a7068dd3695410d70acb2195903e86c93bef9678f510b2ec7dbc6"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.004084 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.009871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" event={"ID":"c6fcff5b-7bdb-4607-bd4d-389c863ddba6","Type":"ContainerStarted","Data":"10bbe845aa6d348cff8111285ba4412d36ca4bd1d704e3870f6df3863ebc6a52"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.010037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.014352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" event={"ID":"d4c8a5bf-4e12-46b4-9d2a-1a098416e90a","Type":"ContainerStarted","Data":"0e3a573d5ec3ae9ac1e3d451a23e201138e72c5bb7411b28d73189d668ce340c"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.015618 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" event={"ID":"71c7d600-2dac-4e19-a33f-0311a8342774","Type":"ContainerStarted","Data":"1eddbe7ec4299040e28a703a41990d68c421b51be0269f8d09c861c0671c7f2d"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.015740 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.020769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" event={"ID":"65c6d17b-b112-421c-a686-ce1601f91181","Type":"ContainerStarted","Data":"7dda179822c0c4ef2ccda6fd73361aac7b93007471dec0ebd04560c4d950ff43"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.021569 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.022084 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" podStartSLOduration=5.404335409 podStartE2EDuration="25.022061298s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.572228665 +0000 UTC m=+1049.561000177" lastFinishedPulling="2026-01-26 22:57:34.189954554 +0000 UTC m=+1069.178726066" observedRunningTime="2026-01-26 22:57:38.020406172 +0000 UTC m=+1073.009177684" watchObservedRunningTime="2026-01-26 22:57:38.022061298 +0000 UTC m=+1073.010832810" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.032946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" event={"ID":"18b41715-6c23-4821-b0da-1c17b5010375","Type":"ContainerStarted","Data":"67bbcfda85cf023096d096cd8c4e64a330e947033fe3d340153753914d282ffb"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.033123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.034735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" event={"ID":"6e38b64f-cd6a-4cac-a125-be388bb0dc78","Type":"ContainerStarted","Data":"66975fc3b9768a1cf7af7d52153d7843d6e2f903f97de2d5ace426be44c18384"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.035095 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.036617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" event={"ID":"06401d29-ca17-43df-865a-26523cf84f67","Type":"ContainerStarted","Data":"605f9fef188bf769f8e8a6e5dccf343a2159623d7f6a69db5e8d6bf4994a49f8"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.036743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.038768 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" 
event={"ID":"bcf4192a-ff9b-4b02-83d0-a17ecd3ba795","Type":"ContainerStarted","Data":"c8177ef8bd105ca9f211499c072ed3ddb03a89fad05c91f07dfd61c806710a13"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.038841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.040000 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" event={"ID":"da1ca3d4-2bae-4133-88c8-b67f74ebc6ab","Type":"ContainerStarted","Data":"1e5082ac017aa7c4c17769c8fd59a4f5b49438d287605a4e025cb0dc2b40c85d"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.040138 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.044216 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" event={"ID":"6e0878fd-53dc-4048-9267-2520c5919067","Type":"ContainerStarted","Data":"20de12f697f8bd17447479eedf5eabbdd4afb8e4511868edcaf9105855390cb1"} Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.044470 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.047734 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" podStartSLOduration=5.841385863 podStartE2EDuration="25.047714815s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.984358392 +0000 UTC m=+1049.973129904" lastFinishedPulling="2026-01-26 22:57:34.190687334 +0000 UTC m=+1069.179458856" observedRunningTime="2026-01-26 22:57:38.041345257 +0000 UTC m=+1073.030116769" watchObservedRunningTime="2026-01-26 22:57:38.047714815 +0000 UTC m=+1073.036486327" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.061477 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vzmsv" podStartSLOduration=2.486638687 podStartE2EDuration="24.061449139s" podCreationTimestamp="2026-01-26 22:57:14 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.281717353 +0000 UTC m=+1050.270488865" lastFinishedPulling="2026-01-26 22:57:36.856527805 +0000 UTC m=+1071.845299317" observedRunningTime="2026-01-26 22:57:38.056561233 +0000 UTC m=+1073.045332745" watchObservedRunningTime="2026-01-26 22:57:38.061449139 +0000 UTC m=+1073.050220651" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.074506 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" podStartSLOduration=5.354683962 podStartE2EDuration="25.074484864s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.469504244 +0000 UTC m=+1049.458275756" lastFinishedPulling="2026-01-26 22:57:34.189305136 +0000 UTC m=+1069.178076658" observedRunningTime="2026-01-26 22:57:38.07041556 +0000 UTC m=+1073.059187072" watchObservedRunningTime="2026-01-26 22:57:38.074484864 +0000 UTC m=+1073.063256376" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.098737 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" podStartSLOduration=3.448130231 podStartE2EDuration="25.098712391s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.08158386 +0000 UTC m=+1050.070355372" lastFinishedPulling="2026-01-26 22:57:36.73216602 +0000 UTC m=+1071.720937532" observedRunningTime="2026-01-26 22:57:38.096647013 +0000 UTC m=+1073.085418535" watchObservedRunningTime="2026-01-26 22:57:38.098712391 +0000 UTC m=+1073.087483903" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.124115 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" podStartSLOduration=5.958200998 podStartE2EDuration="25.12409522s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.02398878 +0000 UTC m=+1050.012760292" lastFinishedPulling="2026-01-26 22:57:34.189883002 +0000 UTC m=+1069.178654514" observedRunningTime="2026-01-26 22:57:38.120692105 +0000 UTC m=+1073.109463617" watchObservedRunningTime="2026-01-26 22:57:38.12409522 +0000 UTC m=+1073.112866732" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.153107 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" podStartSLOduration=4.327154996 podStartE2EDuration="25.15308552s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.354005743 +0000 UTC m=+1050.342777255" lastFinishedPulling="2026-01-26 22:57:36.179936227 +0000 UTC m=+1071.168707779" observedRunningTime="2026-01-26 22:57:38.145528709 +0000 UTC m=+1073.134300221" watchObservedRunningTime="2026-01-26 22:57:38.15308552 +0000 UTC m=+1073.141857032" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.164435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" podStartSLOduration=3.612857504 podStartE2EDuration="25.164415857s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.282120454 +0000 UTC m=+1050.270891966" lastFinishedPulling="2026-01-26 22:57:36.833678807 +0000 UTC m=+1071.822450319" observedRunningTime="2026-01-26 22:57:38.16165837 +0000 UTC m=+1073.150429882" watchObservedRunningTime="2026-01-26 22:57:38.164415857 +0000 UTC m=+1073.153187369" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.185013 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" podStartSLOduration=5.85377806 podStartE2EDuration="25.184991432s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.857533638 +0000 UTC m=+1049.846305150" lastFinishedPulling="2026-01-26 22:57:34.18874701 +0000 UTC m=+1069.177518522" observedRunningTime="2026-01-26 22:57:38.182451201 +0000 UTC m=+1073.171222723" watchObservedRunningTime="2026-01-26 22:57:38.184991432 +0000 UTC m=+1073.173762934" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.205622 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" podStartSLOduration=5.979867914 podStartE2EDuration="25.205604808s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" 
firstStartedPulling="2026-01-26 22:57:14.964869018 +0000 UTC m=+1049.953640530" lastFinishedPulling="2026-01-26 22:57:34.190605912 +0000 UTC m=+1069.179377424" observedRunningTime="2026-01-26 22:57:38.204051134 +0000 UTC m=+1073.192822636" watchObservedRunningTime="2026-01-26 22:57:38.205604808 +0000 UTC m=+1073.194376320" Jan 26 22:57:38 crc kubenswrapper[4793]: I0126 22:57:38.230835 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" podStartSLOduration=4.508481274 podStartE2EDuration="25.230808772s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.421334965 +0000 UTC m=+1050.410106477" lastFinishedPulling="2026-01-26 22:57:36.143662423 +0000 UTC m=+1071.132433975" observedRunningTime="2026-01-26 22:57:38.226700918 +0000 UTC m=+1073.215472420" watchObservedRunningTime="2026-01-26 22:57:38.230808772 +0000 UTC m=+1073.219580284" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.057611 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" event={"ID":"58fa089e-6ccd-4521-88ff-e65e6928b738","Type":"ContainerStarted","Data":"533cc1895afce8ac215f7b7bf08d37266ef42da014f243d2fd35892ebf33f9ab"} Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.058227 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.060971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" event={"ID":"eb8ed64e-38aa-4e1f-be80-29be415125fd","Type":"ContainerStarted","Data":"d4d96d7eb5c447de8569cc67ca667162bb497037174de0057d8fcd61148b3c83"} Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.061376 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.074548 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" podStartSLOduration=2.189450854 podStartE2EDuration="27.074527788s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.448517837 +0000 UTC m=+1049.437289349" lastFinishedPulling="2026-01-26 22:57:39.333594781 +0000 UTC m=+1074.322366283" observedRunningTime="2026-01-26 22:57:40.071273647 +0000 UTC m=+1075.060045169" watchObservedRunningTime="2026-01-26 22:57:40.074527788 +0000 UTC m=+1075.063299310" Jan 26 22:57:40 crc kubenswrapper[4793]: I0126 22:57:40.102576 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" podStartSLOduration=24.511396936 podStartE2EDuration="27.10254176s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:36.75007203 +0000 UTC m=+1071.738843532" lastFinishedPulling="2026-01-26 22:57:39.341216844 +0000 UTC m=+1074.329988356" observedRunningTime="2026-01-26 22:57:40.100051971 +0000 UTC m=+1075.088823573" watchObservedRunningTime="2026-01-26 22:57:40.10254176 +0000 UTC m=+1075.091313312" Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.076518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" event={"ID":"4f086064-ee5a-47cf-bf97-fc2a423d8c33","Type":"ContainerStarted","Data":"c09284cf0b42ba68a80615939cd4ef1c3592469ee39229e0c87aa7e7b9187990"} Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.078314 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.100640 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" podStartSLOduration=2.994285356 podStartE2EDuration="29.100617019s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.078939166 +0000 UTC m=+1050.067710678" lastFinishedPulling="2026-01-26 22:57:41.185270829 +0000 UTC m=+1076.174042341" observedRunningTime="2026-01-26 22:57:42.098681215 +0000 UTC m=+1077.087452727" watchObservedRunningTime="2026-01-26 22:57:42.100617019 +0000 UTC m=+1077.089388531" Jan 26 22:57:42 crc kubenswrapper[4793]: I0126 22:57:42.764134 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.088987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" event={"ID":"b934b85e-14a0-4ad2-bdc0-82280eb346a9","Type":"ContainerStarted","Data":"95a0a1d32f2c55c9f15ac7a52692f4faa1f6c112e1766a57aca010c4df81ef34"} Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.089456 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.116337 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" podStartSLOduration=2.958469386 podStartE2EDuration="30.116304765s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.013875097 +0000 UTC m=+1050.002646609" lastFinishedPulling="2026-01-26 22:57:42.171710486 +0000 UTC m=+1077.160481988" observedRunningTime="2026-01-26 22:57:43.113460655 +0000 UTC m=+1078.102232177" watchObservedRunningTime="2026-01-26 22:57:43.116304765 +0000 UTC m=+1078.105076307" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.684516 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-542qx" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.755080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vnt9q" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.821921 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-954b94f75-fnjqt" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.927574 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-7twhn" Jan 26 22:57:43 crc kubenswrapper[4793]: I0126 22:57:43.965983 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-flkdn" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.076301 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.101800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerStarted","Data":"27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76"} Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.102731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.107167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" event={"ID":"d0e46999-98e6-40a6-99a2-f4c01dad0f81","Type":"ContainerStarted","Data":"210ee7d659ac2088be642e7c257ab164b002b8a31e1d643f3a3d99dbe380be20"} Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.107532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.130154 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podStartSLOduration=2.319216941 podStartE2EDuration="31.130135068s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.076510638 +0000 UTC m=+1050.065282140" lastFinishedPulling="2026-01-26 22:57:43.887428755 +0000 UTC m=+1078.876200267" observedRunningTime="2026-01-26 22:57:44.122126694 +0000 UTC m=+1079.110898216" watchObservedRunningTime="2026-01-26 22:57:44.130135068 +0000 UTC m=+1079.118906580" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.142662 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" podStartSLOduration=2.885131907 podStartE2EDuration="31.142646138s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.527104641 +0000 UTC m=+1050.515876153" lastFinishedPulling="2026-01-26 22:57:43.784618862 +0000 UTC m=+1078.773390384" observedRunningTime="2026-01-26 22:57:44.137041061 +0000 UTC m=+1079.125812583" watchObservedRunningTime="2026-01-26 22:57:44.142646138 +0000 UTC m=+1079.131417650" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.312523 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-2jn8d" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.374149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-rrmcg" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.489846 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-j292m" Jan 26 22:57:44 crc kubenswrapper[4793]: I0126 22:57:44.502952 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sgcrb" Jan 26 22:57:45 crc kubenswrapper[4793]: I0126 22:57:45.116161 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" event={"ID":"7968980a-764c-4cd2-b77f-ed33fffbd294","Type":"ContainerStarted","Data":"a9d60752a68cd5b97e4b08ed3f0ccb384d441f3a237acd514d2caf36b8252743"} Jan 26 22:57:45 crc kubenswrapper[4793]: I0126 22:57:45.116942 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:45 crc kubenswrapper[4793]: I0126 22:57:45.142681 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" podStartSLOduration=3.1013690289999998 podStartE2EDuration="32.142652315s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.265344885 +0000 UTC m=+1050.254116397" lastFinishedPulling="2026-01-26 22:57:44.306628171 +0000 UTC m=+1079.295399683" observedRunningTime="2026-01-26 22:57:45.138033866 +0000 UTC m=+1080.126805418" watchObservedRunningTime="2026-01-26 22:57:45.142652315 +0000 UTC m=+1080.131423827" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.026618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.041035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ca275a-4c50-479e-9ed5-4e03dda309cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4\" (UID: \"86ca275a-4c50-479e-9ed5-4e03dda309cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.052525 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q4r9v" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.061042 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.133946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" event={"ID":"f15117fc-93f9-498b-b831-e87094aa991e","Type":"ContainerStarted","Data":"ce3cc98315a3fdff0be513df123c307ce44b07da8527ed704181d7fe7ffe2863"} Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.134731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.157554 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" podStartSLOduration=2.766079468 podStartE2EDuration="33.157529697s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.847304132 +0000 UTC m=+1049.836075644" lastFinishedPulling="2026-01-26 22:57:45.238754361 +0000 UTC m=+1080.227525873" observedRunningTime="2026-01-26 22:57:46.15299371 +0000 UTC m=+1081.141765222" watchObservedRunningTime="2026-01-26 22:57:46.157529697 +0000 UTC m=+1081.146301249" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.432892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.439096 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cd2fec-ba5b-4984-90c5-565df4ef5cd1-webhook-certs\") pod \"openstack-operator-controller-manager-5c5bb8bdbb-6lmgq\" (UID: \"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1\") " pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.503857 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lmwx4" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.511097 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.591026 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4"] Jan 26 22:57:46 crc kubenswrapper[4793]: W0126 22:57:46.598674 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ca275a_4c50_479e_9ed5_4e03dda309cf.slice/crio-5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4 WatchSource:0}: Error finding container 5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4: Status 404 returned error can't find the container with id 5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4 Jan 26 22:57:46 crc kubenswrapper[4793]: W0126 22:57:46.813057 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1cd2fec_ba5b_4984_90c5_565df4ef5cd1.slice/crio-afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d WatchSource:0}: Error finding container afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d: Status 404 returned error can't find the container with id afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d Jan 26 22:57:46 crc kubenswrapper[4793]: I0126 22:57:46.817269 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq"] Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.146750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" event={"ID":"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1","Type":"ContainerStarted","Data":"d6e30b4d05761b2898142b60a945bf5293d9605373383b754290c5a6a17d1427"} Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.147393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.147410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" event={"ID":"b1cd2fec-ba5b-4984-90c5-565df4ef5cd1","Type":"ContainerStarted","Data":"afb0555efe5d877488100f2bc54fc4b59d7aa0f786fec46f34157df5a3fa164d"} Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.148608 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" event={"ID":"86ca275a-4c50-479e-9ed5-4e03dda309cf","Type":"ContainerStarted","Data":"5fc0838615414c0ccae7e0dbe23b3cb4f75e949b164bef01876c04c3c2f59bd4"} Jan 26 22:57:47 crc kubenswrapper[4793]: I0126 22:57:47.188623 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" podStartSLOduration=33.188569452 podStartE2EDuration="33.188569452s" podCreationTimestamp="2026-01-26 22:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:57:47.183973663 +0000 UTC m=+1082.172745205" watchObservedRunningTime="2026-01-26 22:57:47.188569452 +0000 UTC m=+1082.177340974" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.161713 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" event={"ID":"22d5cae5-26fb-4a47-97ac-78ba7120d29c","Type":"ContainerStarted","Data":"6603f6dcec8adfa1e758ca8200f60813a477761f23272ac80eb0016d47738eec"} Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.162414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.164631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" event={"ID":"42a387a4-faad-41fe-bfa9-0f600a06e6e0","Type":"ContainerStarted","Data":"989a1d347dded3eb96f4bfc83b03b39fcbc70258d523f7299cc728d9cda48121"} Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.164944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.189403 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" podStartSLOduration=3.005734047 podStartE2EDuration="35.189382761s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:15.019367961 +0000 UTC m=+1050.008139473" lastFinishedPulling="2026-01-26 22:57:47.203016665 +0000 UTC m=+1082.191788187" observedRunningTime="2026-01-26 22:57:48.187140449 +0000 UTC m=+1083.175912001" watchObservedRunningTime="2026-01-26 22:57:48.189382761 +0000 UTC m=+1083.178154273" Jan 26 22:57:48 crc kubenswrapper[4793]: I0126 22:57:48.214609 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" podStartSLOduration=2.854735186 podStartE2EDuration="35.214576535s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:14.840833671 +0000 UTC m=+1049.829605183" lastFinishedPulling="2026-01-26 22:57:47.20067502 +0000 UTC m=+1082.189446532" observedRunningTime="2026-01-26 22:57:48.209336749 +0000 UTC m=+1083.198108261" watchObservedRunningTime="2026-01-26 22:57:48.214576535 +0000 UTC m=+1083.203348047" Jan 26 22:57:49 crc kubenswrapper[4793]: I0126 22:57:49.732869 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8zm8h" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.725938 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-dgk2z" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.782513 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cpwtx" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.807115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkwjg" Jan 26 22:57:53 crc kubenswrapper[4793]: I0126 22:57:53.931237 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-g2r87" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.092433 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.125227 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vxk8h" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.139288 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-mvr7p" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.340577 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-bp8k8" Jan 26 22:57:54 crc kubenswrapper[4793]: I0126 22:57:54.654884 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-cb484894b-qxf7r" Jan 26 22:57:56 crc kubenswrapper[4793]: I0126 22:57:56.523058 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5c5bb8bdbb-6lmgq" Jan 26 22:57:59 crc kubenswrapper[4793]: E0126 22:57:59.394104 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22" Jan 26 22:57:59 crc kubenswrapper[4793]: E0126 22:57:59.395491 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pwkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4_openstack-operators(86ca275a-4c50-479e-9ed5-4e03dda309cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 22:57:59 crc kubenswrapper[4793]: E0126 22:57:59.396910 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" podUID="86ca275a-4c50-479e-9ed5-4e03dda309cf"
Jan 26 22:58:00 crc kubenswrapper[4793]: E0126 22:58:00.261287 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" podUID="86ca275a-4c50-479e-9ed5-4e03dda309cf"
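[editor's note] The ErrImagePull at 22:57:59 followed by ImagePullBackOff at 22:58:00 is the kubelet's pull back-off at work: each failed pull roughly doubles the wait before the next attempt. A sketch of that schedule in Python, assuming the upstream kubelet defaults of a 10s initial back-off capped at 300s (compiled-in values, not something read from this log):

    # Assumed defaults: 10s base, 300s cap; the doubling is the documented behavior.
    base, cap = 10, 300
    delay, schedule = base, []
    while delay < cap:
        schedule.append(delay)
        delay = min(delay * 2, cap)
    schedule.append(cap)
    print(schedule)  # [10, 20, 40, 80, 160, 300]

The ContainerStarted event at 22:58:16 just below is consistent with one of those retries succeeding.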
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:58:16 crc kubenswrapper[4793]: I0126 22:58:16.425538 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" podStartSLOduration=34.577739177 podStartE2EDuration="1m3.425511135s" podCreationTimestamp="2026-01-26 22:57:13 +0000 UTC" firstStartedPulling="2026-01-26 22:57:46.601859405 +0000 UTC m=+1081.590630927" lastFinishedPulling="2026-01-26 22:58:15.449631333 +0000 UTC m=+1110.438402885" observedRunningTime="2026-01-26 22:58:16.423902891 +0000 UTC m=+1111.412674403" watchObservedRunningTime="2026-01-26 22:58:16.425511135 +0000 UTC m=+1111.414282647" Jan 26 22:58:18 crc kubenswrapper[4793]: I0126 22:58:18.323016 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:58:18 crc kubenswrapper[4793]: I0126 22:58:18.323563 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:58:26 crc kubenswrapper[4793]: I0126 22:58:26.070786 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.230128 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.231993 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.235023 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.235089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.236374 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.236550 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-pnjfr" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.236702 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.237187 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.240421 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.256740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.375889 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-server-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.375952 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhc4\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-kube-api-access-cqhc4\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376043 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376166 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376241 4793 
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376393 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/733142d4-49c6-4e25-a160-78aa1118d296-pod-info\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.376508 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/733142d4-49c6-4e25-a160-78aa1118d296-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/733142d4-49c6-4e25-a160-78aa1118d296-pod-info\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478394 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478422 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/733142d4-49c6-4e25-a160-78aa1118d296-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478443 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-server-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0"
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478498 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhc4\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-kube-api-access-cqhc4\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478518 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.478566 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479387 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479722 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479784 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/733142d4-49c6-4e25-a160-78aa1118d296-server-conf\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.479747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.483134 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.483175 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c84253ff34d7709066a0f64547ac926619bbfc7fcf55095c6f0d9139388f0b03/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.486989 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/733142d4-49c6-4e25-a160-78aa1118d296-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.487127 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/733142d4-49c6-4e25-a160-78aa1118d296-pod-info\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.487175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.513446 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee975f4-67e8-48bd-93f2-1fe39eea72cb\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.513874 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhc4\" (UniqueName: \"kubernetes.io/projected/733142d4-49c6-4e25-a160-78aa1118d296-kube-api-access-cqhc4\") pod \"rabbitmq-server-0\" (UID: \"733142d4-49c6-4e25-a160-78aa1118d296\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.535334 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.537587 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.542977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543065 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543282 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543347 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-lmfsk" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.543368 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.548908 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.562316 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690837 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690913 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.690942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 
22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691053 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691112 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691217 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwckb\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-kube-api-access-kwckb\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.691258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.795938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796112 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796168 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796226 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwckb\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-kube-api-access-kwckb\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.796341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.804415 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.805346 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.806193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.806294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.806673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.808760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.809448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.809845 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.811056 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.822581 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823002 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823320 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823567 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-752gh" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.823772 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.824507 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.827047 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.827101 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1653d9c4aeb4b7737476ff27069b6a90b3652e944ebf43c1da6f7c30c8acb21/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.840663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwckb\" (UniqueName: \"kubernetes.io/projected/4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a-kube-api-access-kwckb\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.876730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ae7e116-00a5-4343-8d68-c4c4bff49d81\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.893543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.897912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.897956 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.897998 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.898049 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.898249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.898361 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.900520 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.900611 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:35 crc kubenswrapper[4793]: I0126 22:58:35.900857 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzdq\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-kube-api-access-kqzdq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002527 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqzdq\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-kube-api-access-kqzdq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002551 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002578 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002628 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002687 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.002758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.004355 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.008774 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.008827 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.009692 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.010013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.010075 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.011766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.012086 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.012117 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1b65e6087beb8f396d5bde46e8010e1ab98c55dd5e3fbf5a99be8a053675d77/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.023132 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.024507 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.036533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqzdq\" (UniqueName: \"kubernetes.io/projected/a8704cd7-a5e9-45ca-9886-cef2f797c7f1-kube-api-access-kqzdq\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.036802 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-9nz9x" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.037149 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.039146 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.039421 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.044155 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.067131 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.068121 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ac8990-6644-43aa-90a6-7184f261a339\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ac8990-6644-43aa-90a6-7184f261a339\") pod \"rabbitmq-cell1-server-0\" (UID: \"a8704cd7-a5e9-45ca-9886-cef2f797c7f1\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.087627 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.105624 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107452 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrmt\" (UniqueName: \"kubernetes.io/projected/9cf954a6-b71a-49f9-93c5-618d4e944159-kube-api-access-cxrmt\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.107991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.108082 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.139451 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.145022 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.147112 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.155974 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156364 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-server-dockercfg-9s6l9" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-default-user" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156771 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.156966 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.174376 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6n7w\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-kube-api-access-l6n7w\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70c43064-b9c2-4f01-9bc2-5431c5dca494-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210609 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210664 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210689 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210724 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210770 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210799 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210827 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210849 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cxrmt\" (UniqueName: \"kubernetes.io/projected/9cf954a6-b71a-49f9-93c5-618d4e944159-kube-api-access-cxrmt\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70c43064-b9c2-4f01-9bc2-5431c5dca494-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.210942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.212286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.212720 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.213713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.214481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf954a6-b71a-49f9-93c5-618d4e944159-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.225319 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.226728 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.226758 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/49d64d0e1d76b735b81103bcddd20a8a89d874a94e2a3556534488c056649f70/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.228541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf954a6-b71a-49f9-93c5-618d4e944159-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.233445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxrmt\" (UniqueName: \"kubernetes.io/projected/9cf954a6-b71a-49f9-93c5-618d4e944159-kube-api-access-cxrmt\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.301051 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be60587c-fb74-41f0-9193-a9735fe5320f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be60587c-fb74-41f0-9193-a9735fe5320f\") pod \"openstack-galera-0\" (UID: \"9cf954a6-b71a-49f9-93c5-618d4e944159\") " pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70c43064-b9c2-4f01-9bc2-5431c5dca494-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316324 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316358 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70c43064-b9c2-4f01-9bc2-5431c5dca494-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316438 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.316524 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6n7w\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-kube-api-access-l6n7w\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.324277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.324566 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.324987 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.325318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70c43064-b9c2-4f01-9bc2-5431c5dca494-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: 
I0126 22:58:36.325991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70c43064-b9c2-4f01-9bc2-5431c5dca494-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.326297 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70c43064-b9c2-4f01-9bc2-5431c5dca494-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.329862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70c43064-b9c2-4f01-9bc2-5431c5dca494-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.331389 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.331446 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5b5053d94bc750fff8de7b0b157012f63097405e6337ff112a2f4d24dfa8b4d/globalmount\"" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.358078 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6n7w\" (UniqueName: \"kubernetes.io/projected/70c43064-b9c2-4f01-9bc2-5431c5dca494-kube-api-access-l6n7w\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.365316 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.381633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.435002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25d62f2c-bdb0-46f0-a08c-5e382b96747a\") pod \"rabbitmq-notifications-server-0\" (UID: \"70c43064-b9c2-4f01-9bc2-5431c5dca494\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.485566 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.497841 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.592793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerStarted","Data":"db624f0f3d3cb164ac2b7a1a8b687d140077b13e2c996d37dd1dc3311975ddd7"} Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.608491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerStarted","Data":"878a021024ca930bea18b6297f6a00be04621ac4fa1a27feb90ecbb735f4909d"} Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.653745 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerStarted","Data":"4817f3f121ed18c631ca49dc418d7fe8c4a8f3ab69cf8b3bca4b17b3a8d1a830"} Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.776109 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.780885 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.783329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-9hdb7" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.784105 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.792608 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.837634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsllm\" (UniqueName: \"kubernetes.io/projected/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kube-api-access-zsllm\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.837697 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-config-data\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.837763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kolla-config\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.939829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kolla-config\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " 
pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.939956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsllm\" (UniqueName: \"kubernetes.io/projected/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kube-api-access-zsllm\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.939990 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-config-data\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.940971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-config-data\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.941407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kolla-config\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.962132 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsllm\" (UniqueName: \"kubernetes.io/projected/d933c86b-bb42-4d94-9eb2-65888a5e95ab-kube-api-access-zsllm\") pod \"memcached-0\" (UID: \"d933c86b-bb42-4d94-9eb2-65888a5e95ab\") " pod="nova-kuttl-default/memcached-0" Jan 26 22:58:36 crc kubenswrapper[4793]: I0126 22:58:36.996924 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.108359 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.128226 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: W0126 22:58:37.129606 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c43064_b9c2_4f01_9bc2_5431c5dca494.slice/crio-316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2 WatchSource:0}: Error finding container 316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2: Status 404 returned error can't find the container with id 316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2 Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.592079 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: W0126 22:58:37.602591 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd933c86b_bb42_4d94_9eb2_65888a5e95ab.slice/crio-53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50 WatchSource:0}: Error finding container 53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50: Status 404 returned error can't find the container with id 53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50 Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.665668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerStarted","Data":"ddbd86b057746cacd08a276158c06d9a8ef6d6941d330c26e761665ea5835474"} Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.668086 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerStarted","Data":"316c559c4180b890f674527d6a80bfa7c26cae158117ac7275ee545091690ca2"} Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.669585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"d933c86b-bb42-4d94-9eb2-65888a5e95ab","Type":"ContainerStarted","Data":"53f1bd395a3c556cfe8d89fdb59756199d13803ddf8f08098845495f4a385a50"} Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.707588 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.714308 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.718215 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-xg6r2" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.718441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.721950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.729867 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.731380 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756652 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7r2m\" (UniqueName: \"kubernetes.io/projected/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kube-api-access-q7r2m\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756749 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756816 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: 
I0126 22:58:37.756916 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.756962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858016 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858099 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858122 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858256 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7r2m\" (UniqueName: \"kubernetes.io/projected/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kube-api-access-q7r2m\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " 
pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.858303 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.859231 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.859667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.861028 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.864433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3daf2-84e7-4831-9eae-155632f1b0cd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.868023 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.868070 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/85fb299fc637fb168ba164470fbe00a2809160fb168d25cec10801b57445b91c/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.870556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.877392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3daf2-84e7-4831-9eae-155632f1b0cd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.878059 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7r2m\" (UniqueName: \"kubernetes.io/projected/1fc3daf2-84e7-4831-9eae-155632f1b0cd-kube-api-access-q7r2m\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:37 crc kubenswrapper[4793]: I0126 22:58:37.907086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-071cfedb-1347-4743-a581-3effab99c2f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-071cfedb-1347-4743-a581-3effab99c2f0\") pod \"openstack-cell1-galera-0\" (UID: \"1fc3daf2-84e7-4831-9eae-155632f1b0cd\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:38 crc kubenswrapper[4793]: I0126 22:58:38.063033 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:38 crc kubenswrapper[4793]: I0126 22:58:38.661505 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 26 22:58:38 crc kubenswrapper[4793]: W0126 22:58:38.697979 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc3daf2_84e7_4831_9eae_155632f1b0cd.slice/crio-136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0 WatchSource:0}: Error finding container 136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0: Status 404 returned error can't find the container with id 136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0 Jan 26 22:58:39 crc kubenswrapper[4793]: I0126 22:58:39.695535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerStarted","Data":"136d2eada8dc8ad8ca0e43714623b1e36f1332a2625491cbc9da04375f4e14f0"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.322804 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.323709 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.791030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"d933c86b-bb42-4d94-9eb2-65888a5e95ab","Type":"ContainerStarted","Data":"2361625337e35c8eac0c41fe782b0be79cc75721f64f7d81c308cb9285165509"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.791479 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.792614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerStarted","Data":"fdda41f7414d46997cb7867b1c5ac174c160d966e32baf0ef8109fbb3b4ec313"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.794022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerStarted","Data":"53ed54adcffe723cc5c04ca3514e65945e26ee7b5398c857f9dbce255f565dfd"} Jan 26 22:58:48 crc kubenswrapper[4793]: I0126 22:58:48.811525 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.9759169989999998 podStartE2EDuration="12.811502597s" podCreationTimestamp="2026-01-26 22:58:36 +0000 UTC" firstStartedPulling="2026-01-26 22:58:37.607350149 +0000 UTC m=+1132.596121661" lastFinishedPulling="2026-01-26 22:58:48.442935727 +0000 UTC m=+1143.431707259" observedRunningTime="2026-01-26 22:58:48.809130081 +0000 UTC m=+1143.797901603" watchObservedRunningTime="2026-01-26 
22:58:48.811502597 +0000 UTC m=+1143.800274109" Jan 26 22:58:49 crc kubenswrapper[4793]: I0126 22:58:49.805286 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerStarted","Data":"c0e1c08938d407156158ed4ade62f90fafca0be9a3c8b26ccbdd350915d858cd"} Jan 26 22:58:50 crc kubenswrapper[4793]: I0126 22:58:50.815858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerStarted","Data":"dc3e39837d8e86561cf472a2f6f2745af62ce503d34f942a6ce713fb061ed1f7"} Jan 26 22:58:50 crc kubenswrapper[4793]: I0126 22:58:50.818335 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerStarted","Data":"df044600c026e05e6cd9488f72ff0a79fad01760d11999e101be3902e689d983"} Jan 26 22:58:50 crc kubenswrapper[4793]: I0126 22:58:50.820859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerStarted","Data":"ad58aa032b54033c90e8ecf954d6ad00a7b0dcbeedc6afe2b86ad4ad1a96e8ad"} Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.848662 4793 generic.go:334] "Generic (PLEG): container finished" podID="1fc3daf2-84e7-4831-9eae-155632f1b0cd" containerID="fdda41f7414d46997cb7867b1c5ac174c160d966e32baf0ef8109fbb3b4ec313" exitCode=0 Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.848838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerDied","Data":"fdda41f7414d46997cb7867b1c5ac174c160d966e32baf0ef8109fbb3b4ec313"} Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.851942 4793 generic.go:334] "Generic (PLEG): container finished" podID="9cf954a6-b71a-49f9-93c5-618d4e944159" containerID="53ed54adcffe723cc5c04ca3514e65945e26ee7b5398c857f9dbce255f565dfd" exitCode=0 Jan 26 22:58:54 crc kubenswrapper[4793]: I0126 22:58:54.851985 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerDied","Data":"53ed54adcffe723cc5c04ca3514e65945e26ee7b5398c857f9dbce255f565dfd"} Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.863856 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1fc3daf2-84e7-4831-9eae-155632f1b0cd","Type":"ContainerStarted","Data":"883c50ad0901303930f73ed00a7cb86060246fc7d8c8bd00c8c17ae567cf99ab"} Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.866837 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"9cf954a6-b71a-49f9-93c5-618d4e944159","Type":"ContainerStarted","Data":"188bcef44c0b27f096f082d8b315d54f7b30cc57cf17a49ab9eabde5c537cb12"} Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.907015 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=10.168277259 podStartE2EDuration="19.906990903s" podCreationTimestamp="2026-01-26 22:58:36 +0000 UTC" firstStartedPulling="2026-01-26 22:58:38.702065453 +0000 UTC m=+1133.690836965" lastFinishedPulling="2026-01-26 22:58:48.440779077 +0000 UTC 
m=+1143.429550609" observedRunningTime="2026-01-26 22:58:55.901277083 +0000 UTC m=+1150.890048615" watchObservedRunningTime="2026-01-26 22:58:55.906990903 +0000 UTC m=+1150.895762425" Jan 26 22:58:55 crc kubenswrapper[4793]: I0126 22:58:55.934848 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=10.458058645 podStartE2EDuration="21.934826201s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" firstStartedPulling="2026-01-26 22:58:37.010196801 +0000 UTC m=+1131.998968313" lastFinishedPulling="2026-01-26 22:58:48.486964347 +0000 UTC m=+1143.475735869" observedRunningTime="2026-01-26 22:58:55.923433432 +0000 UTC m=+1150.912204964" watchObservedRunningTime="2026-01-26 22:58:55.934826201 +0000 UTC m=+1150.923597713" Jan 26 22:58:56 crc kubenswrapper[4793]: I0126 22:58:56.382347 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:56 crc kubenswrapper[4793]: I0126 22:58:56.382384 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:58:57 crc kubenswrapper[4793]: I0126 22:58:57.110382 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 26 22:58:58 crc kubenswrapper[4793]: I0126 22:58:58.063591 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:58:58 crc kubenswrapper[4793]: I0126 22:58:58.063787 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.139538 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.238140 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.484511 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:59:00 crc kubenswrapper[4793]: I0126 22:59:00.567864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.128695 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.132773 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.136789 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.144770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.199189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.199255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.300892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.301320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.302413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.335649 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"root-account-create-update-pf48f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.458985 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:05 crc kubenswrapper[4793]: I0126 22:59:05.949382 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:05 crc kubenswrapper[4793]: W0126 22:59:05.952565 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b57ce0_a45c_41a2_a54b_68cd4bb0996f.slice/crio-b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b WatchSource:0}: Error finding container b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b: Status 404 returned error can't find the container with id b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.477979 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.479469 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.513071 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.521667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.521824 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.603887 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.605710 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.608782 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.612638 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.625117 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.628410 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.628522 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.628700 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.630128 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.668583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"keystone-db-create-566zk\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.731374 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.732323 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.733237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.748129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"keystone-517e-account-create-update-z5vlt\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.807120 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.892705 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.894302 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.902362 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.930301 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.937316 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.937396 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.954577 4793 generic.go:334] "Generic (PLEG): container finished" podID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerID="89a414f6ad4eff334e7c77f7dc32f2bcaf583553d9176e51a8eb3503f48b90e9" exitCode=0 Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.954623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pf48f" event={"ID":"55b57ce0-a45c-41a2-a54b-68cd4bb0996f","Type":"ContainerDied","Data":"89a414f6ad4eff334e7c77f7dc32f2bcaf583553d9176e51a8eb3503f48b90e9"} Jan 26 22:59:06 crc kubenswrapper[4793]: I0126 22:59:06.954656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pf48f" event={"ID":"55b57ce0-a45c-41a2-a54b-68cd4bb0996f","Type":"ContainerStarted","Data":"b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.015455 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.017271 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.021534 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.026698 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045744 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.045809 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.046732 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.063591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"placement-db-create-bz7cb\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.147165 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.149554 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.148199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.164035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"placement-ca47-account-create-update-8c675\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.231857 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.320591 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.346756 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.416148 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.500414 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 22:59:07 crc kubenswrapper[4793]: W0126 22:59:07.503500 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466a301d_4d91_4f49_831a_7d7f07ecd1bd.slice/crio-60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075 WatchSource:0}: Error finding container 60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075: Status 404 returned error can't find the container with id 60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.786791 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 22:59:07 crc kubenswrapper[4793]: W0126 22:59:07.787041 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05214d2f_078c_43c5_bf4d_c8d80580ce8f.slice/crio-c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb WatchSource:0}: Error finding container c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb: Status 404 returned error can't find the container with id c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.963326 4793 generic.go:334] "Generic (PLEG): container finished" podID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" 
containerID="4e539782db86a1f33f1e8d758da83db10df681ef54a932f02b39da2a8adb6252" exitCode=0 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.963402 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-bz7cb" event={"ID":"466a301d-4d91-4f49-831a-7d7f07ecd1bd","Type":"ContainerDied","Data":"4e539782db86a1f33f1e8d758da83db10df681ef54a932f02b39da2a8adb6252"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.963451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-bz7cb" event={"ID":"466a301d-4d91-4f49-831a-7d7f07ecd1bd","Type":"ContainerStarted","Data":"60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.965080 4793 generic.go:334] "Generic (PLEG): container finished" podID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerID="df5d497c42d8a41e51dcee4a447c2fbe1e944067666c916ccbedfee89f899ce4" exitCode=0 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.965156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" event={"ID":"a62f2482-4c97-42ed-92d7-1b0dc5319971","Type":"ContainerDied","Data":"df5d497c42d8a41e51dcee4a447c2fbe1e944067666c916ccbedfee89f899ce4"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.965226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" event={"ID":"a62f2482-4c97-42ed-92d7-1b0dc5319971","Type":"ContainerStarted","Data":"1c043068139cd4d00e5b5fe1b6d930a2b0283b12081e9e25bfb54c733f1d3512"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.966800 4793 generic.go:334] "Generic (PLEG): container finished" podID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerID="2b2b55f201bdc2b2f238cbda3668509af74a34b70e97e47426e1788b79f37c2e" exitCode=0 Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.966900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-566zk" event={"ID":"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a","Type":"ContainerDied","Data":"2b2b55f201bdc2b2f238cbda3668509af74a34b70e97e47426e1788b79f37c2e"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.966931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-566zk" event={"ID":"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a","Type":"ContainerStarted","Data":"1d500a4e24123c868163e68fc793bc5704da2cb708739917273b7d7a3b805aa6"} Jan 26 22:59:07 crc kubenswrapper[4793]: I0126 22:59:07.968065 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" event={"ID":"05214d2f-078c-43c5-bf4d-c8d80580ce8f","Type":"ContainerStarted","Data":"c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb"} Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.263636 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.377511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") pod \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.377558 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") pod \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\" (UID: \"55b57ce0-a45c-41a2-a54b-68cd4bb0996f\") " Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.378686 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55b57ce0-a45c-41a2-a54b-68cd4bb0996f" (UID: "55b57ce0-a45c-41a2-a54b-68cd4bb0996f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.384360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt" (OuterVolumeSpecName: "kube-api-access-6cldt") pod "55b57ce0-a45c-41a2-a54b-68cd4bb0996f" (UID: "55b57ce0-a45c-41a2-a54b-68cd4bb0996f"). InnerVolumeSpecName "kube-api-access-6cldt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.479427 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.479482 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cldt\" (UniqueName: \"kubernetes.io/projected/55b57ce0-a45c-41a2-a54b-68cd4bb0996f-kube-api-access-6cldt\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.986184 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pf48f" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.986182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pf48f" event={"ID":"55b57ce0-a45c-41a2-a54b-68cd4bb0996f","Type":"ContainerDied","Data":"b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b"} Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.986309 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b685a3801a5169ac544af63d8e9900763cc63380a70a4da72c5d4e006103171b" Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.989386 4793 generic.go:334] "Generic (PLEG): container finished" podID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerID="5222c87c78a1d9a9de00844bf7dc07187aeb8cf7d19078f05c743d77635dc81a" exitCode=0 Jan 26 22:59:08 crc kubenswrapper[4793]: I0126 22:59:08.989507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" event={"ID":"05214d2f-078c-43c5-bf4d-c8d80580ce8f","Type":"ContainerDied","Data":"5222c87c78a1d9a9de00844bf7dc07187aeb8cf7d19078f05c743d77635dc81a"} Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.350637 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.395314 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") pod \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.395418 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") pod \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\" (UID: \"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.396402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" (UID: "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.396997 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.402591 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg" (OuterVolumeSpecName: "kube-api-access-vkbrg") pod "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" (UID: "74d39f02-fd88-4b8f-8f71-0f4f1386ad9a"). InnerVolumeSpecName "kube-api-access-vkbrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.438027 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.444019 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498006 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") pod \"a62f2482-4c97-42ed-92d7-1b0dc5319971\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") pod \"a62f2482-4c97-42ed-92d7-1b0dc5319971\" (UID: \"a62f2482-4c97-42ed-92d7-1b0dc5319971\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498100 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") pod \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498159 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") pod \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\" (UID: \"466a301d-4d91-4f49-831a-7d7f07ecd1bd\") " Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498403 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkbrg\" (UniqueName: \"kubernetes.io/projected/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a-kube-api-access-vkbrg\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.498630 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a62f2482-4c97-42ed-92d7-1b0dc5319971" (UID: "a62f2482-4c97-42ed-92d7-1b0dc5319971"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.499422 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "466a301d-4d91-4f49-831a-7d7f07ecd1bd" (UID: "466a301d-4d91-4f49-831a-7d7f07ecd1bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.501553 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8" (OuterVolumeSpecName: "kube-api-access-pmfg8") pod "466a301d-4d91-4f49-831a-7d7f07ecd1bd" (UID: "466a301d-4d91-4f49-831a-7d7f07ecd1bd"). InnerVolumeSpecName "kube-api-access-pmfg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.501642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr" (OuterVolumeSpecName: "kube-api-access-5kfrr") pod "a62f2482-4c97-42ed-92d7-1b0dc5319971" (UID: "a62f2482-4c97-42ed-92d7-1b0dc5319971"). InnerVolumeSpecName "kube-api-access-5kfrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600068 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kfrr\" (UniqueName: \"kubernetes.io/projected/a62f2482-4c97-42ed-92d7-1b0dc5319971-kube-api-access-5kfrr\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600446 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466a301d-4d91-4f49-831a-7d7f07ecd1bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600467 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmfg8\" (UniqueName: \"kubernetes.io/projected/466a301d-4d91-4f49-831a-7d7f07ecd1bd-kube-api-access-pmfg8\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.600487 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f2482-4c97-42ed-92d7-1b0dc5319971-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.999304 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-566zk" event={"ID":"74d39f02-fd88-4b8f-8f71-0f4f1386ad9a","Type":"ContainerDied","Data":"1d500a4e24123c868163e68fc793bc5704da2cb708739917273b7d7a3b805aa6"} Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.999347 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d500a4e24123c868163e68fc793bc5704da2cb708739917273b7d7a3b805aa6" Jan 26 22:59:09 crc kubenswrapper[4793]: I0126 22:59:09.999348 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-566zk" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.001208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-bz7cb" event={"ID":"466a301d-4d91-4f49-831a-7d7f07ecd1bd","Type":"ContainerDied","Data":"60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075"} Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.001242 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60162eaba3c7244073b5a952a7623850b6d7701f3c341cdecbbc700809199075" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.001256 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-bz7cb" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.002832 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" event={"ID":"a62f2482-4c97-42ed-92d7-1b0dc5319971","Type":"ContainerDied","Data":"1c043068139cd4d00e5b5fe1b6d930a2b0283b12081e9e25bfb54c733f1d3512"} Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.002864 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-517e-account-create-update-z5vlt" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.002867 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c043068139cd4d00e5b5fe1b6d930a2b0283b12081e9e25bfb54c733f1d3512" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.365789 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.436035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") pod \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.436131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") pod \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\" (UID: \"05214d2f-078c-43c5-bf4d-c8d80580ce8f\") " Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.436806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05214d2f-078c-43c5-bf4d-c8d80580ce8f" (UID: "05214d2f-078c-43c5-bf4d-c8d80580ce8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.440328 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59" (OuterVolumeSpecName: "kube-api-access-bjt59") pod "05214d2f-078c-43c5-bf4d-c8d80580ce8f" (UID: "05214d2f-078c-43c5-bf4d-c8d80580ce8f"). InnerVolumeSpecName "kube-api-access-bjt59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.538270 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjt59\" (UniqueName: \"kubernetes.io/projected/05214d2f-078c-43c5-bf4d-c8d80580ce8f-kube-api-access-bjt59\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:10 crc kubenswrapper[4793]: I0126 22:59:10.538311 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05214d2f-078c-43c5-bf4d-c8d80580ce8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.014720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" event={"ID":"05214d2f-078c-43c5-bf4d-c8d80580ce8f","Type":"ContainerDied","Data":"c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb"} Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.014799 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c79a0169582dedddc18b9f72c282cb8add01ed17e2663a9a0d5ebbb8a8a2eb" Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.014958 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ca47-account-create-update-8c675" Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.816575 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:11 crc kubenswrapper[4793]: I0126 22:59:11.826255 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-pf48f"] Jan 26 22:59:13 crc kubenswrapper[4793]: I0126 22:59:13.776284 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" path="/var/lib/kubelet/pods/55b57ce0-a45c-41a2-a54b-68cd4bb0996f/volumes" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.804312 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"] Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805299 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805316 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805344 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805354 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805375 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805384 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805398 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805406 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: E0126 22:59:16.805415 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805423 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805629 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805641 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b57ce0-a45c-41a2-a54b-68cd4bb0996f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805660 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805675 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" containerName="mariadb-account-create-update" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.805691 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" containerName="mariadb-database-create" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.806329 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.808649 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.812808 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"] Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.865599 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.865723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.967706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.967816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.969957 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:16 crc kubenswrapper[4793]: I0126 22:59:16.989334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"root-account-create-update-hrh5x\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " 
pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:17 crc kubenswrapper[4793]: I0126 22:59:17.134696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:17 crc kubenswrapper[4793]: I0126 22:59:17.393748 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-hrh5x"] Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.086568 4793 generic.go:334] "Generic (PLEG): container finished" podID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerID="cbbda6e47228c7c1d12d5254e7a2e2c7a763c76bfeae45a791419e764951d10e" exitCode=0 Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.086656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-hrh5x" event={"ID":"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a","Type":"ContainerDied","Data":"cbbda6e47228c7c1d12d5254e7a2e2c7a763c76bfeae45a791419e764951d10e"} Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.087003 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-hrh5x" event={"ID":"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a","Type":"ContainerStarted","Data":"73979830749be87b355aab2585ef0b9dd7cc4eb559480d528575bcafe7ac1ba2"} Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.322079 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.322166 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.322291 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.323135 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 22:59:18 crc kubenswrapper[4793]: I0126 22:59:18.323304 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587" gracePeriod=600 Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.094837 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587" exitCode=0 Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.094908 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587"} Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.096327 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3"} Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.096384 4793 scope.go:117] "RemoveContainer" containerID="2270771b37172879cefcac364640536fda7d04596bd8eec13cf97ac1bcd6539c" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.402678 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.514365 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") pod \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.514469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") pod \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\" (UID: \"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a\") " Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.515360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" (UID: "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.520083 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl" (OuterVolumeSpecName: "kube-api-access-h6qwl") pod "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" (UID: "bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a"). InnerVolumeSpecName "kube-api-access-h6qwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.616665 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:19 crc kubenswrapper[4793]: I0126 22:59:19.616714 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6qwl\" (UniqueName: \"kubernetes.io/projected/bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a-kube-api-access-h6qwl\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:20 crc kubenswrapper[4793]: I0126 22:59:20.106467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-hrh5x" event={"ID":"bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a","Type":"ContainerDied","Data":"73979830749be87b355aab2585ef0b9dd7cc4eb559480d528575bcafe7ac1ba2"} Jan 26 22:59:20 crc kubenswrapper[4793]: I0126 22:59:20.106515 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73979830749be87b355aab2585ef0b9dd7cc4eb559480d528575bcafe7ac1ba2" Jan 26 22:59:20 crc kubenswrapper[4793]: I0126 22:59:20.106536 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-hrh5x" Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.152176 4793 generic.go:334] "Generic (PLEG): container finished" podID="733142d4-49c6-4e25-a160-78aa1118d296" containerID="df044600c026e05e6cd9488f72ff0a79fad01760d11999e101be3902e689d983" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.152344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerDied","Data":"df044600c026e05e6cd9488f72ff0a79fad01760d11999e101be3902e689d983"} Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.157185 4793 generic.go:334] "Generic (PLEG): container finished" podID="4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a" containerID="ad58aa032b54033c90e8ecf954d6ad00a7b0dcbeedc6afe2b86ad4ad1a96e8ad" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.157236 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerDied","Data":"ad58aa032b54033c90e8ecf954d6ad00a7b0dcbeedc6afe2b86ad4ad1a96e8ad"} Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.161587 4793 generic.go:334] "Generic (PLEG): container finished" podID="70c43064-b9c2-4f01-9bc2-5431c5dca494" containerID="dc3e39837d8e86561cf472a2f6f2745af62ce503d34f942a6ce713fb061ed1f7" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.161685 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerDied","Data":"dc3e39837d8e86561cf472a2f6f2745af62ce503d34f942a6ce713fb061ed1f7"} Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.169997 4793 generic.go:334] "Generic (PLEG): container finished" podID="a8704cd7-a5e9-45ca-9886-cef2f797c7f1" containerID="c0e1c08938d407156158ed4ade62f90fafca0be9a3c8b26ccbdd350915d858cd" exitCode=0 Jan 26 22:59:23 crc kubenswrapper[4793]: I0126 22:59:23.170059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" 
event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerDied","Data":"c0e1c08938d407156158ed4ade62f90fafca0be9a3c8b26ccbdd350915d858cd"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.181648 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"70c43064-b9c2-4f01-9bc2-5431c5dca494","Type":"ContainerStarted","Data":"6e04fb78473a6e1d823d46381810bb168dd34ceb2332a4d5a5d581ce4f6a63e5"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.182132 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.184383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"a8704cd7-a5e9-45ca-9886-cef2f797c7f1","Type":"ContainerStarted","Data":"3c2d0ca4f2464e59b4d0807e8205259a305d4756f944fedfdb17e808c33b6aa6"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.184693 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.186435 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"733142d4-49c6-4e25-a160-78aa1118d296","Type":"ContainerStarted","Data":"bc28bd9572d761503b3bdb0f59c6102f6fb3440ad5db561b927a8a1b6ecf4ee9"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.186655 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.188556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a","Type":"ContainerStarted","Data":"15e92c80edfd73c107b78d52200b524418ac73e88b5179aac2d42e70b0d9da08"} Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.188773 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.208427 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=37.762116054 podStartE2EDuration="49.208408519s" podCreationTimestamp="2026-01-26 22:58:35 +0000 UTC" firstStartedPulling="2026-01-26 22:58:37.134124464 +0000 UTC m=+1132.122895976" lastFinishedPulling="2026-01-26 22:58:48.580416919 +0000 UTC m=+1143.569188441" observedRunningTime="2026-01-26 22:59:24.20451751 +0000 UTC m=+1179.193289022" watchObservedRunningTime="2026-01-26 22:59:24.208408519 +0000 UTC m=+1179.197180031" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.238161 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.878208045 podStartE2EDuration="50.238142344s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" firstStartedPulling="2026-01-26 22:58:36.100101746 +0000 UTC m=+1131.088873258" lastFinishedPulling="2026-01-26 22:58:48.460036005 +0000 UTC m=+1143.448807557" observedRunningTime="2026-01-26 22:59:24.233304378 +0000 UTC m=+1179.222075920" watchObservedRunningTime="2026-01-26 22:59:24.238142344 +0000 UTC m=+1179.226913876" Jan 26 22:59:24 crc kubenswrapper[4793]: I0126 22:59:24.268735 4793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=38.324534733 podStartE2EDuration="50.268715082s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" firstStartedPulling="2026-01-26 22:58:36.554119305 +0000 UTC m=+1131.542890817" lastFinishedPulling="2026-01-26 22:58:48.498299654 +0000 UTC m=+1143.487071166" observedRunningTime="2026-01-26 22:59:24.262810197 +0000 UTC m=+1179.251581729" watchObservedRunningTime="2026-01-26 22:59:24.268715082 +0000 UTC m=+1179.257486594" Jan 26 22:59:35 crc kubenswrapper[4793]: I0126 22:59:35.552374 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 26 22:59:35 crc kubenswrapper[4793]: I0126 22:59:35.581894 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=49.530715379 podStartE2EDuration="1m1.581868468s" podCreationTimestamp="2026-01-26 22:58:34 +0000 UTC" firstStartedPulling="2026-01-26 22:58:36.413908046 +0000 UTC m=+1131.402679558" lastFinishedPulling="2026-01-26 22:58:48.465061135 +0000 UTC m=+1143.453832647" observedRunningTime="2026-01-26 22:59:24.288522518 +0000 UTC m=+1179.277294040" watchObservedRunningTime="2026-01-26 22:59:35.581868468 +0000 UTC m=+1190.570639970" Jan 26 22:59:35 crc kubenswrapper[4793]: I0126 22:59:35.897148 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.108842 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-zhxm7"] Jan 26 22:59:36 crc kubenswrapper[4793]: E0126 22:59:36.109132 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerName="mariadb-account-create-update" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.109148 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerName="mariadb-account-create-update" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.109311 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a" containerName="mariadb-account-create-update" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.109799 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.113168 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.120228 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.120463 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.122066 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.129313 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-zhxm7"] Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.154646 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.258244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.258364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.258466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.360415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.360501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.360595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: 
I0126 22:59:36.366601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.373344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.376612 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"keystone-db-sync-zhxm7\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.429468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.489544 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 26 22:59:36 crc kubenswrapper[4793]: I0126 22:59:36.911898 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-zhxm7"] Jan 26 22:59:37 crc kubenswrapper[4793]: I0126 22:59:37.285037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerStarted","Data":"35f6cec859d4c0cb4947d3b714b43934822523a0c753893837d0513004de5961"} Jan 26 22:59:43 crc kubenswrapper[4793]: I0126 22:59:43.329121 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerStarted","Data":"3e4bdec32ef11b18d32eb2dfb6bb5b1e0c14f224fc1e48974b53e00695952bb5"} Jan 26 22:59:43 crc kubenswrapper[4793]: I0126 22:59:43.343536 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-zhxm7" podStartSLOduration=1.4853572 podStartE2EDuration="7.343518272s" podCreationTimestamp="2026-01-26 22:59:36 +0000 UTC" firstStartedPulling="2026-01-26 22:59:36.916834451 +0000 UTC m=+1191.905605963" lastFinishedPulling="2026-01-26 22:59:42.774995493 +0000 UTC m=+1197.763767035" observedRunningTime="2026-01-26 22:59:43.343234944 +0000 UTC m=+1198.332006466" watchObservedRunningTime="2026-01-26 22:59:43.343518272 +0000 UTC m=+1198.332289784" Jan 26 22:59:46 crc kubenswrapper[4793]: I0126 22:59:46.366008 4793 generic.go:334] "Generic (PLEG): container finished" podID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerID="3e4bdec32ef11b18d32eb2dfb6bb5b1e0c14f224fc1e48974b53e00695952bb5" exitCode=0 Jan 26 22:59:46 crc kubenswrapper[4793]: I0126 22:59:46.366077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerDied","Data":"3e4bdec32ef11b18d32eb2dfb6bb5b1e0c14f224fc1e48974b53e00695952bb5"} Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.723131 4793 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.845856 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") pod \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.846092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") pod \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.846138 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") pod \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\" (UID: \"288bfc86-c5a0-401c-bbda-ea48bbbd855c\") " Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.854937 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7" (OuterVolumeSpecName: "kube-api-access-qpbg7") pod "288bfc86-c5a0-401c-bbda-ea48bbbd855c" (UID: "288bfc86-c5a0-401c-bbda-ea48bbbd855c"). InnerVolumeSpecName "kube-api-access-qpbg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.867408 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "288bfc86-c5a0-401c-bbda-ea48bbbd855c" (UID: "288bfc86-c5a0-401c-bbda-ea48bbbd855c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.887223 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data" (OuterVolumeSpecName: "config-data") pod "288bfc86-c5a0-401c-bbda-ea48bbbd855c" (UID: "288bfc86-c5a0-401c-bbda-ea48bbbd855c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.948861 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpbg7\" (UniqueName: \"kubernetes.io/projected/288bfc86-c5a0-401c-bbda-ea48bbbd855c-kube-api-access-qpbg7\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.948928 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:47 crc kubenswrapper[4793]: I0126 22:59:47.948972 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288bfc86-c5a0-401c-bbda-ea48bbbd855c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.386048 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-zhxm7" event={"ID":"288bfc86-c5a0-401c-bbda-ea48bbbd855c","Type":"ContainerDied","Data":"35f6cec859d4c0cb4947d3b714b43934822523a0c753893837d0513004de5961"} Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.386110 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f6cec859d4c0cb4947d3b714b43934822523a0c753893837d0513004de5961" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.386145 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-zhxm7" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.599590 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"] Jan 26 22:59:48 crc kubenswrapper[4793]: E0126 22:59:48.600085 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerName="keystone-db-sync" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.600120 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerName="keystone-db-sync" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.600332 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="288bfc86-c5a0-401c-bbda-ea48bbbd855c" containerName="keystone-db-sync" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.602158 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.605565 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.605795 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.606018 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.606235 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.606510 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.616834 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"] Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.761310 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762511 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762716 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.762897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.763063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 
crc kubenswrapper[4793]: I0126 22:59:48.792487 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-rn9xt"] Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.794004 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.799694 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-fvrsv" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.799930 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.800058 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.813597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-rn9xt"] Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864128 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864580 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864688 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864846 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.864993 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.868384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.868727 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.870597 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.871747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.875330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.890809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"keystone-bootstrap-xh8vs\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.926148 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:48 crc kubenswrapper[4793]: I0126 22:59:48.966792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068157 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068294 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " 
pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.068349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.069295 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.082310 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.083322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.085477 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.093941 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"placement-db-sync-rn9xt\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.115119 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.377623 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"] Jan 26 22:59:49 crc kubenswrapper[4793]: W0126 22:59:49.380514 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e3c9439_9c67_49d0_92be_f7b5aea5e408.slice/crio-062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0 WatchSource:0}: Error finding container 062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0: Status 404 returned error can't find the container with id 062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0 Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.394009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerStarted","Data":"062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0"} Jan 26 22:59:49 crc kubenswrapper[4793]: I0126 22:59:49.591035 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-rn9xt"] Jan 26 22:59:50 crc kubenswrapper[4793]: I0126 22:59:50.404649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerStarted","Data":"ae3549f4820c063cd76f871e76554cea9d2d893a0f2e1a36623be4d87e460a34"} Jan 26 22:59:50 crc kubenswrapper[4793]: I0126 22:59:50.412931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerStarted","Data":"fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f"} Jan 26 22:59:50 crc kubenswrapper[4793]: I0126 22:59:50.446001 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" podStartSLOduration=2.445971472 podStartE2EDuration="2.445971472s" podCreationTimestamp="2026-01-26 22:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:59:50.438032879 +0000 UTC m=+1205.426804391" watchObservedRunningTime="2026-01-26 22:59:50.445971472 +0000 UTC m=+1205.434742984" Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.425668 4793 generic.go:334] "Generic (PLEG): container finished" podID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerID="fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f" exitCode=0 Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.425735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerDied","Data":"fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f"} Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.427348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerStarted","Data":"42adf1d1b5d18c7fa51274887f38aad9de206abb589b3cb63586e984bd4dcb69"} Jan 26 22:59:52 crc kubenswrapper[4793]: I0126 22:59:52.459683 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/placement-db-sync-rn9xt" podStartSLOduration=1.9339202979999999 podStartE2EDuration="4.459665277s" podCreationTimestamp="2026-01-26 22:59:48 +0000 UTC" firstStartedPulling="2026-01-26 22:59:49.588725088 +0000 UTC m=+1204.577496600" lastFinishedPulling="2026-01-26 22:59:52.114470067 +0000 UTC m=+1207.103241579" observedRunningTime="2026-01-26 22:59:52.455205181 +0000 UTC m=+1207.443976703" watchObservedRunningTime="2026-01-26 22:59:52.459665277 +0000 UTC m=+1207.448436789" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.794356 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.943737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944344 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944380 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.944406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") pod \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\" (UID: \"6e3c9439-9c67-49d0-92be-f7b5aea5e408\") " Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.950786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.951536 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.952008 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts" (OuterVolumeSpecName: "scripts") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.951990 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg" (OuterVolumeSpecName: "kube-api-access-ntwrg") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "kube-api-access-ntwrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.972901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data" (OuterVolumeSpecName: "config-data") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:53 crc kubenswrapper[4793]: I0126 22:59:53.974455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e3c9439-9c67-49d0-92be-f7b5aea5e408" (UID: "6e3c9439-9c67-49d0-92be-f7b5aea5e408"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045612 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntwrg\" (UniqueName: \"kubernetes.io/projected/6e3c9439-9c67-49d0-92be-f7b5aea5e408-kube-api-access-ntwrg\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045644 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045654 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045664 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045674 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.045684 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3c9439-9c67-49d0-92be-f7b5aea5e408-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.454777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" event={"ID":"6e3c9439-9c67-49d0-92be-f7b5aea5e408","Type":"ContainerDied","Data":"062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0"} Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.454858 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062809365bc93ec06fb12c51ac8551ee8aed14763725c8fe5de2e65a5d08b0c0" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.454891 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-xh8vs" Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.470755 4793 generic.go:334] "Generic (PLEG): container finished" podID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerID="42adf1d1b5d18c7fa51274887f38aad9de206abb589b3cb63586e984bd4dcb69" exitCode=0 Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.470834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerDied","Data":"42adf1d1b5d18c7fa51274887f38aad9de206abb589b3cb63586e984bd4dcb69"} Jan 26 22:59:54 crc kubenswrapper[4793]: I0126 22:59:54.992140 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"] Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.000648 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-xh8vs"] Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.080506 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-5hcwl"] Jan 26 22:59:55 crc kubenswrapper[4793]: E0126 22:59:55.080819 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerName="keystone-bootstrap" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.080836 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerName="keystone-bootstrap" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.080996 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" containerName="keystone-bootstrap" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.081525 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086163 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086863 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.086929 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.089521 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.110244 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-5hcwl"] Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.162595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.162844 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.162966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.163071 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.163160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.163297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 22:59:55 
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264255 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264305 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.264430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.271470 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.271513 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.272265 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.272954 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.273643 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.301563 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"keystone-bootstrap-5hcwl\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.419651 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.778956 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3c9439-9c67-49d0-92be-f7b5aea5e408" path="/var/lib/kubelet/pods/6e3c9439-9c67-49d0-92be-f7b5aea5e408/volumes"
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.843231 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-5hcwl"]
Jan 26 22:59:55 crc kubenswrapper[4793]: W0126 22:59:55.843918 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66c97183_25d5_4ca6_bede_619df68d1471.slice/crio-1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a WatchSource:0}: Error finding container 1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a: Status 404 returned error can't find the container with id 1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a
Jan 26 22:59:55 crc kubenswrapper[4793]: I0126 22:59:55.856255 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt"
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.009896 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010149 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010283 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.010658 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs" (OuterVolumeSpecName: "logs") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.011141 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") pod \"26147a27-d0db-498e-bf1d-fb496d6a7b48\" (UID: \"26147a27-d0db-498e-bf1d-fb496d6a7b48\") " Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.011660 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26147a27-d0db-498e-bf1d-fb496d6a7b48-logs\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.015370 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5" (OuterVolumeSpecName: "kube-api-access-mrkm5") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "kube-api-access-mrkm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.017179 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts" (OuterVolumeSpecName: "scripts") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.031945 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data" (OuterVolumeSpecName: "config-data") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.033876 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26147a27-d0db-498e-bf1d-fb496d6a7b48" (UID: "26147a27-d0db-498e-bf1d-fb496d6a7b48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113249 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113301 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrkm5\" (UniqueName: \"kubernetes.io/projected/26147a27-d0db-498e-bf1d-fb496d6a7b48-kube-api-access-mrkm5\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113322 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.113338 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26147a27-d0db-498e-bf1d-fb496d6a7b48-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.485461 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-rn9xt" event={"ID":"26147a27-d0db-498e-bf1d-fb496d6a7b48","Type":"ContainerDied","Data":"ae3549f4820c063cd76f871e76554cea9d2d893a0f2e1a36623be4d87e460a34"} Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.485499 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae3549f4820c063cd76f871e76554cea9d2d893a0f2e1a36623be4d87e460a34" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.485558 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-rn9xt" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.496649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerStarted","Data":"edeb56d663cc21ffac8620b620d9103115a9fe78e769cf71f65bf0ea0163950a"} Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.496887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerStarted","Data":"1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a"} Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.530425 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" podStartSLOduration=1.530403325 podStartE2EDuration="1.530403325s" podCreationTimestamp="2026-01-26 22:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:59:56.521041552 +0000 UTC m=+1211.509813054" watchObservedRunningTime="2026-01-26 22:59:56.530403325 +0000 UTC m=+1211.519174837" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.585257 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-655b5d9d44-xt9hv"] Jan 26 22:59:56 crc kubenswrapper[4793]: E0126 22:59:56.585562 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerName="placement-db-sync" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.585575 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerName="placement-db-sync" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.585722 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="26147a27-d0db-498e-bf1d-fb496d6a7b48" containerName="placement-db-sync" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.586505 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.589864 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-fvrsv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.594295 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.594448 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-655b5d9d44-xt9hv"] Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.595977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-scripts\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729618 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-combined-ca-bundle\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6n6d\" (UniqueName: \"kubernetes.io/projected/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-kube-api-access-n6n6d\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-config-data\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.729726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-logs\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-logs\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-scripts\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " 
pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-combined-ca-bundle\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6n6d\" (UniqueName: \"kubernetes.io/projected/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-kube-api-access-n6n6d\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-config-data\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.831724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-logs\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.838173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-scripts\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.838632 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-combined-ca-bundle\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.844909 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-config-data\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.849730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6n6d\" (UniqueName: \"kubernetes.io/projected/a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e-kube-api-access-n6n6d\") pod \"placement-655b5d9d44-xt9hv\" (UID: \"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e\") " pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:56 crc kubenswrapper[4793]: I0126 22:59:56.910715 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:57 crc kubenswrapper[4793]: I0126 22:59:57.321035 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-655b5d9d44-xt9hv"] Jan 26 22:59:57 crc kubenswrapper[4793]: W0126 22:59:57.337340 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aa4144_bcdd_4fb8_84f4_fb976c8c4a0e.slice/crio-0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a WatchSource:0}: Error finding container 0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a: Status 404 returned error can't find the container with id 0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a Jan 26 22:59:57 crc kubenswrapper[4793]: I0126 22:59:57.510038 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" event={"ID":"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e","Type":"ContainerStarted","Data":"0b6dc9e7ebd2670d336de62035dc3009cd3fc5aef6a5fa198e37f4c18473285a"} Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.523453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" event={"ID":"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e","Type":"ContainerStarted","Data":"3395d5f2d3a64863228c8a5a8c3f503786df769bcab98061231efead25feb9b9"} Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.524015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" event={"ID":"a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e","Type":"ContainerStarted","Data":"5be80f442506a831ab76a7cf49181400847f78f458353f8b9971687e33034154"} Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.524058 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.524089 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 22:59:58 crc kubenswrapper[4793]: I0126 22:59:58.557759 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" podStartSLOduration=2.5577364620000003 podStartE2EDuration="2.557736462s" podCreationTimestamp="2026-01-26 22:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 22:59:58.542144545 +0000 UTC m=+1213.530916087" watchObservedRunningTime="2026-01-26 22:59:58.557736462 +0000 UTC m=+1213.546507964" Jan 26 22:59:59 crc kubenswrapper[4793]: I0126 22:59:59.532663 4793 generic.go:334] "Generic (PLEG): container finished" podID="66c97183-25d5-4ca6-bede-619df68d1471" containerID="edeb56d663cc21ffac8620b620d9103115a9fe78e769cf71f65bf0ea0163950a" exitCode=0 Jan 26 22:59:59 crc kubenswrapper[4793]: I0126 22:59:59.532808 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerDied","Data":"edeb56d663cc21ffac8620b620d9103115a9fe78e769cf71f65bf0ea0163950a"} Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.148115 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s"] Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.152694 4793 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.160174 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.163660 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.187627 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s"] Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.286548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.286772 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.286808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.388938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.389060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.389105 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.390968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.396087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.420927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"collect-profiles-29491140-4wv2s\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.504565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.908872 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999384 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999513 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999613 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: \"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:00 crc kubenswrapper[4793]: I0126 23:00:00.999727 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") pod \"66c97183-25d5-4ca6-bede-619df68d1471\" (UID: 
\"66c97183-25d5-4ca6-bede-619df68d1471\") " Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.006246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.006575 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.006819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr" (OuterVolumeSpecName: "kube-api-access-jbbbr") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "kube-api-access-jbbbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.007019 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts" (OuterVolumeSpecName: "scripts") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.034375 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data" (OuterVolumeSpecName: "config-data") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.044823 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66c97183-25d5-4ca6-bede-619df68d1471" (UID: "66c97183-25d5-4ca6-bede-619df68d1471"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.049686 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s"] Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.102271 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.102635 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.102839 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbbr\" (UniqueName: \"kubernetes.io/projected/66c97183-25d5-4ca6-bede-619df68d1471-kube-api-access-jbbbr\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.103030 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.103230 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.103419 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c97183-25d5-4ca6-bede-619df68d1471-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.554476 4793 generic.go:334] "Generic (PLEG): container finished" podID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerID="ef3d2d9cccb326a3ea1d8eee537506e17562bf7264a5c79f77fa04751d2cee45" exitCode=0 Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.554540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" event={"ID":"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89","Type":"ContainerDied","Data":"ef3d2d9cccb326a3ea1d8eee537506e17562bf7264a5c79f77fa04751d2cee45"} Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.554843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" event={"ID":"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89","Type":"ContainerStarted","Data":"2b632696e958cc87fbdb7bb5241e6abfe3807fc3828296466a72c7d0491bea3b"} Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.557105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" event={"ID":"66c97183-25d5-4ca6-bede-619df68d1471","Type":"ContainerDied","Data":"1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a"} Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.557135 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfcc06bdfe1a9b55d097850bfefc5646103312d342182b7f72fcc4669bff71a" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.557469 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-5hcwl" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.636502 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-564b65fb54-5sx8h"] Jan 26 23:00:01 crc kubenswrapper[4793]: E0126 23:00:01.636900 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c97183-25d5-4ca6-bede-619df68d1471" containerName="keystone-bootstrap" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.636921 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c97183-25d5-4ca6-bede-619df68d1471" containerName="keystone-bootstrap" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.637129 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c97183-25d5-4ca6-bede-619df68d1471" containerName="keystone-bootstrap" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.637768 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.640116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.640946 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.641046 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-56gjn" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.641423 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.659892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-564b65fb54-5sx8h"] Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-credential-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-494l6\" (UniqueName: \"kubernetes.io/projected/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-kube-api-access-494l6\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714742 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-scripts\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-combined-ca-bundle\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 
26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-fernet-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.714863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-config-data\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-config-data\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-credential-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816775 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-494l6\" (UniqueName: \"kubernetes.io/projected/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-kube-api-access-494l6\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816823 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-combined-ca-bundle\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-scripts\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.816869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-fernet-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.822348 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-credential-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc 
kubenswrapper[4793]: I0126 23:00:01.822421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-fernet-keys\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.822483 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-combined-ca-bundle\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.824750 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-scripts\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.833285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-494l6\" (UniqueName: \"kubernetes.io/projected/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-kube-api-access-494l6\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.840228 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348-config-data\") pod \"keystone-564b65fb54-5sx8h\" (UID: \"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348\") " pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:01 crc kubenswrapper[4793]: I0126 23:00:01.958623 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.437449 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-564b65fb54-5sx8h"] Jan 26 23:00:02 crc kubenswrapper[4793]: W0126 23:00:02.446162 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dfd7a46_fa72_4bfe_9f03_7ac1c0fed348.slice/crio-14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e WatchSource:0}: Error finding container 14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e: Status 404 returned error can't find the container with id 14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.564971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" event={"ID":"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348","Type":"ContainerStarted","Data":"14e3484e2379199840ade5336c646d940ae7e4fa02c40476840153f47c12a31e"} Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.842801 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.932873 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") pod \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.932965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") pod \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.933059 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") pod \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\" (UID: \"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89\") " Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.933908 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" (UID: "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.937046 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2" (OuterVolumeSpecName: "kube-api-access-bfxr2") pod "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" (UID: "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89"). InnerVolumeSpecName "kube-api-access-bfxr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:02 crc kubenswrapper[4793]: I0126 23:00:02.937537 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" (UID: "8f1bb97b-a1a2-40f7-b79b-8002bcce7c89"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.034626 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfxr2\" (UniqueName: \"kubernetes.io/projected/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-kube-api-access-bfxr2\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.034657 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.034667 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1bb97b-a1a2-40f7-b79b-8002bcce7c89-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.576845 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.576886 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-4wv2s" event={"ID":"8f1bb97b-a1a2-40f7-b79b-8002bcce7c89","Type":"ContainerDied","Data":"2b632696e958cc87fbdb7bb5241e6abfe3807fc3828296466a72c7d0491bea3b"} Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.576939 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b632696e958cc87fbdb7bb5241e6abfe3807fc3828296466a72c7d0491bea3b" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.579573 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" event={"ID":"2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348","Type":"ContainerStarted","Data":"3bb7392c381a55291adae0f08b831cbd66b3755881e2d7a971b72ce159c9c567"} Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.579806 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:03 crc kubenswrapper[4793]: I0126 23:00:03.614091 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" podStartSLOduration=2.613972064 podStartE2EDuration="2.613972064s" podCreationTimestamp="2026-01-26 23:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:00:03.604719984 +0000 UTC m=+1218.593491526" watchObservedRunningTime="2026-01-26 23:00:03.613972064 +0000 UTC m=+1218.602743616" Jan 26 23:00:28 crc kubenswrapper[4793]: I0126 23:00:28.078822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 23:00:28 crc kubenswrapper[4793]: I0126 23:00:28.087280 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-655b5d9d44-xt9hv" Jan 26 23:00:33 crc kubenswrapper[4793]: I0126 23:00:33.368474 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-564b65fb54-5sx8h" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.598602 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 26 23:00:36 crc kubenswrapper[4793]: E0126 23:00:36.600404 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerName="collect-profiles" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.600423 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerName="collect-profiles" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.600596 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1bb97b-a1a2-40f7-b79b-8002bcce7c89" containerName="collect-profiles" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.601044 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.605552 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.605888 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.606203 4793 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-p8bsf" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.607725 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config-secret\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741283 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741341 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.741362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k57t\" (UniqueName: \"kubernetes.io/projected/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-kube-api-access-7k57t\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.842708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k57t\" (UniqueName: \"kubernetes.io/projected/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-kube-api-access-7k57t\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc 
kubenswrapper[4793]: I0126 23:00:36.842754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config-secret\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.843784 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.847929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-openstack-config-secret\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.857692 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.874322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k57t\" (UniqueName: \"kubernetes.io/projected/dfe7ef28-7eb1-48c8-b2bb-8990933e8971-kube-api-access-7k57t\") pod \"openstackclient\" (UID: \"dfe7ef28-7eb1-48c8-b2bb-8990933e8971\") " pod="nova-kuttl-default/openstackclient" Jan 26 23:00:36 crc kubenswrapper[4793]: I0126 23:00:36.922961 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 26 23:00:37 crc kubenswrapper[4793]: I0126 23:00:37.193910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 26 23:00:37 crc kubenswrapper[4793]: I0126 23:00:37.886898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"dfe7ef28-7eb1-48c8-b2bb-8990933e8971","Type":"ContainerStarted","Data":"5f778100eddf01a756eba7bf0b72af0e7334c772ac8e13f0dd5c47c2475207b6"} Jan 26 23:00:45 crc kubenswrapper[4793]: I0126 23:00:45.949738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"dfe7ef28-7eb1-48c8-b2bb-8990933e8971","Type":"ContainerStarted","Data":"aa49776f8519ee53d4d7acf0dd08f0be7d10021a30eee7f26524c2163e6ee7bb"} Jan 26 23:00:45 crc kubenswrapper[4793]: I0126 23:00:45.972039 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=2.193053141 podStartE2EDuration="9.972020649s" podCreationTimestamp="2026-01-26 23:00:36 +0000 UTC" firstStartedPulling="2026-01-26 23:00:37.195953211 +0000 UTC m=+1252.184724723" lastFinishedPulling="2026-01-26 23:00:44.974920719 +0000 UTC m=+1259.963692231" observedRunningTime="2026-01-26 23:00:45.965825905 +0000 UTC m=+1260.954597447" watchObservedRunningTime="2026-01-26 23:00:45.972020649 +0000 UTC m=+1260.960792171" Jan 26 23:00:57 crc kubenswrapper[4793]: I0126 23:00:57.903691 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 23:00:57 crc kubenswrapper[4793]: I0126 23:00:57.904305 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" containerID="cri-o://27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76" gracePeriod=10 Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.065932 4793 generic.go:334] "Generic (PLEG): container finished" podID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerID="27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76" exitCode=0 Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.065975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerDied","Data":"27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76"} Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.434077 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.434899 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" containerID="cri-o://4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d" gracePeriod=10 Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.443419 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-864fr"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.444510 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.454415 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.455354 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.465513 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-6sh7v" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.466294 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.484597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-864fr"] Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.535949 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkchx\" (UniqueName: \"kubernetes.io/projected/209b817f-0bec-4d6b-814c-ae2a07913a56-kube-api-access-zkchx\") pod \"nova-operator-controller-manager-5b4fc7b894-6xkrk\" (UID: \"209b817f-0bec-4d6b-814c-ae2a07913a56\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.536017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqm9\" (UniqueName: \"kubernetes.io/projected/c0a1696e-8859-4708-84f1-57b65d7cc16a-kube-api-access-wqqm9\") pod \"nova-operator-index-864fr\" (UID: \"c0a1696e-8859-4708-84f1-57b65d7cc16a\") " pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.603529 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.637070 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkchx\" (UniqueName: \"kubernetes.io/projected/209b817f-0bec-4d6b-814c-ae2a07913a56-kube-api-access-zkchx\") pod \"nova-operator-controller-manager-5b4fc7b894-6xkrk\" (UID: \"209b817f-0bec-4d6b-814c-ae2a07913a56\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.637139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqm9\" (UniqueName: \"kubernetes.io/projected/c0a1696e-8859-4708-84f1-57b65d7cc16a-kube-api-access-wqqm9\") pod \"nova-operator-index-864fr\" (UID: \"c0a1696e-8859-4708-84f1-57b65d7cc16a\") " pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.673855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkchx\" (UniqueName: \"kubernetes.io/projected/209b817f-0bec-4d6b-814c-ae2a07913a56-kube-api-access-zkchx\") pod \"nova-operator-controller-manager-5b4fc7b894-6xkrk\" (UID: \"209b817f-0bec-4d6b-814c-ae2a07913a56\") " pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.687790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqm9\" (UniqueName: \"kubernetes.io/projected/c0a1696e-8859-4708-84f1-57b65d7cc16a-kube-api-access-wqqm9\") pod \"nova-operator-index-864fr\" (UID: \"c0a1696e-8859-4708-84f1-57b65d7cc16a\") " pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.737882 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") pod \"c130e1aa-1f05-45ff-8364-714b79fa7282\" (UID: \"c130e1aa-1f05-45ff-8364-714b79fa7282\") " Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.746493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4" (OuterVolumeSpecName: "kube-api-access-7gwc4") pod "c130e1aa-1f05-45ff-8364-714b79fa7282" (UID: "c130e1aa-1f05-45ff-8364-714b79fa7282"). InnerVolumeSpecName "kube-api-access-7gwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.839187 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gwc4\" (UniqueName: \"kubernetes.io/projected/c130e1aa-1f05-45ff-8364-714b79fa7282-kube-api-access-7gwc4\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.912059 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:00:58 crc kubenswrapper[4793]: I0126 23:00:58.933873 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.093676 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" event={"ID":"c130e1aa-1f05-45ff-8364-714b79fa7282","Type":"ContainerDied","Data":"a5c867b9927299be310ccbc0cb616dae9ad68f5d7f00ed7bfa18b82b1aeee089"} Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.093700 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.093946 4793 scope.go:117] "RemoveContainer" containerID="27aa43d53cac21ae560dc4dfba6c32cef6066e0e37407cd3e0517161f5497f76" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097014 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerID="4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d" exitCode=0 Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerDied","Data":"4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d"} Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" event={"ID":"3a8dce43-2d79-4865-b17b-c73d6809865d","Type":"ContainerDied","Data":"e948222ad969867dfca2ac3068c15d29f9c21e465ef410e50da6d8c9a179da98"} Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.097090 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e948222ad969867dfca2ac3068c15d29f9c21e465ef410e50da6d8c9a179da98" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.118010 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.142842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.152365 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-55w8b"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.249069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") pod \"3a8dce43-2d79-4865-b17b-c73d6809865d\" (UID: \"3a8dce43-2d79-4865-b17b-c73d6809865d\") " Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.259138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7" (OuterVolumeSpecName: "kube-api-access-gnkk7") pod "3a8dce43-2d79-4865-b17b-c73d6809865d" (UID: "3a8dce43-2d79-4865-b17b-c73d6809865d"). InnerVolumeSpecName "kube-api-access-gnkk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.350659 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnkk7\" (UniqueName: \"kubernetes.io/projected/3a8dce43-2d79-4865-b17b-c73d6809865d-kube-api-access-gnkk7\") on node \"crc\" DevicePath \"\"" Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.433634 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-864fr"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.506460 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk"] Jan 26 23:00:59 crc kubenswrapper[4793]: I0126 23:00:59.773272 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" path="/var/lib/kubelet/pods/c130e1aa-1f05-45ff-8364-714b79fa7282/volumes" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.105136 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" event={"ID":"209b817f-0bec-4d6b-814c-ae2a07913a56","Type":"ContainerStarted","Data":"7158bf42d356c8ab937717c85403f66c562ae7cc47f3757935b34b5c54e24a0d"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.105220 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" event={"ID":"209b817f-0bec-4d6b-814c-ae2a07913a56","Type":"ContainerStarted","Data":"8af648461e636b5cfbfd62ec09835634b28050a89febd9aa985dd87f13b014d6"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.105239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.106746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-864fr" event={"ID":"c0a1696e-8859-4708-84f1-57b65d7cc16a","Type":"ContainerStarted","Data":"9bc06a03cfa59b765710ac34e1759e4ed6d47a27f44674e2a5d6fc7078255302"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.106769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-864fr" event={"ID":"c0a1696e-8859-4708-84f1-57b65d7cc16a","Type":"ContainerStarted","Data":"9617a2561dd7c9268bc956e249d81fe520a078ea6f6a5e9bdbac901eeb86922c"} Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.107726 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.128815 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" podStartSLOduration=2.128793276 podStartE2EDuration="2.128793276s" podCreationTimestamp="2026-01-26 23:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:01:00.120313498 +0000 UTC m=+1275.109085010" watchObservedRunningTime="2026-01-26 23:01:00.128793276 +0000 UTC m=+1275.117564788" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.138935 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-cron-29491141-mgww5"] Jan 26 23:01:00 crc kubenswrapper[4793]: E0126 23:01:00.139306 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139327 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" Jan 26 23:01:00 crc kubenswrapper[4793]: E0126 23:01:00.139338 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139346 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139534 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c130e1aa-1f05-45ff-8364-714b79fa7282" containerName="manager" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.139558 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" containerName="operator" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.140168 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.148265 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.178485 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-844f6594fb-khtqs"] Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.186415 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-cron-29491141-mgww5"] Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.194246 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-864fr" podStartSLOduration=1.992319285 podStartE2EDuration="2.194226163s" podCreationTimestamp="2026-01-26 23:00:58 +0000 UTC" firstStartedPulling="2026-01-26 23:00:59.428887409 +0000 UTC m=+1274.417658921" lastFinishedPulling="2026-01-26 23:00:59.630794287 +0000 UTC m=+1274.619565799" observedRunningTime="2026-01-26 23:01:00.156065912 +0000 UTC m=+1275.144837444" watchObservedRunningTime="2026-01-26 23:01:00.194226163 +0000 UTC m=+1275.182997675" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262600 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.262698 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.363721 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.363869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.364908 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.364961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.370657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.381875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.390042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.393151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"keystone-cron-29491141-mgww5\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.454242 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:00 crc kubenswrapper[4793]: I0126 23:01:00.746140 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-cron-29491141-mgww5"] Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.117312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerStarted","Data":"510edd1d58864f0ad4c2e90e3e555c8b700644d5d6c6bb95477b04e71a7bf309"} Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.117364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerStarted","Data":"f1418c6e20b5c23b9316cce52eaa87883fcaf7b57fc4c52015ff85562e5cf3bb"} Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.135414 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" podStartSLOduration=1.135394303 podStartE2EDuration="1.135394303s" podCreationTimestamp="2026-01-26 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:01:01.133390496 +0000 UTC m=+1276.122162008" watchObservedRunningTime="2026-01-26 23:01:01.135394303 +0000 UTC m=+1276.124165815" Jan 26 23:01:01 crc kubenswrapper[4793]: I0126 23:01:01.770958 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8dce43-2d79-4865-b17b-c73d6809865d" path="/var/lib/kubelet/pods/3a8dce43-2d79-4865-b17b-c73d6809865d/volumes" Jan 26 23:01:03 crc kubenswrapper[4793]: I0126 23:01:03.137067 4793 generic.go:334] "Generic (PLEG): container finished" podID="86415a48-3c2d-430b-a38c-f43e1c518984" containerID="510edd1d58864f0ad4c2e90e3e555c8b700644d5d6c6bb95477b04e71a7bf309" exitCode=0 Jan 26 23:01:03 crc kubenswrapper[4793]: I0126 23:01:03.137119 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerDied","Data":"510edd1d58864f0ad4c2e90e3e555c8b700644d5d6c6bb95477b04e71a7bf309"} Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.498172 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651183 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651345 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651383 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.651464 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") pod \"86415a48-3c2d-430b-a38c-f43e1c518984\" (UID: \"86415a48-3c2d-430b-a38c-f43e1c518984\") " Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.657070 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.659780 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx" (OuterVolumeSpecName: "kube-api-access-xhtzx") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "kube-api-access-xhtzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.674228 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.694660 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data" (OuterVolumeSpecName: "config-data") pod "86415a48-3c2d-430b-a38c-f43e1c518984" (UID: "86415a48-3c2d-430b-a38c-f43e1c518984"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753000 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753034 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753044 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86415a48-3c2d-430b-a38c-f43e1c518984-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:04 crc kubenswrapper[4793]: I0126 23:01:04.753054 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhtzx\" (UniqueName: \"kubernetes.io/projected/86415a48-3c2d-430b-a38c-f43e1c518984-kube-api-access-xhtzx\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:05 crc kubenswrapper[4793]: I0126 23:01:05.151102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" event={"ID":"86415a48-3c2d-430b-a38c-f43e1c518984","Type":"ContainerDied","Data":"f1418c6e20b5c23b9316cce52eaa87883fcaf7b57fc4c52015ff85562e5cf3bb"} Jan 26 23:01:05 crc kubenswrapper[4793]: I0126 23:01:05.151485 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1418c6e20b5c23b9316cce52eaa87883fcaf7b57fc4c52015ff85562e5cf3bb" Jan 26 23:01:05 crc kubenswrapper[4793]: I0126 23:01:05.151173 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-cron-29491141-mgww5" Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.912774 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.913319 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.944979 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5b4fc7b894-6xkrk" Jan 26 23:01:08 crc kubenswrapper[4793]: I0126 23:01:08.964641 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:01:09 crc kubenswrapper[4793]: I0126 23:01:09.215979 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-864fr" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.238149 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"] Jan 26 23:01:15 crc kubenswrapper[4793]: E0126 23:01:15.238959 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86415a48-3c2d-430b-a38c-f43e1c518984" containerName="keystone-cron" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.238973 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86415a48-3c2d-430b-a38c-f43e1c518984" containerName="keystone-cron" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.239126 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86415a48-3c2d-430b-a38c-f43e1c518984" containerName="keystone-cron" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.240136 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.242201 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-97km8" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.254432 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"] Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.306541 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.306636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.306804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408476 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.408998 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.409036 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.426766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:15 crc kubenswrapper[4793]: I0126 23:01:15.580996 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:16 crc kubenswrapper[4793]: I0126 23:01:16.047953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj"] Jan 26 23:01:16 crc kubenswrapper[4793]: I0126 23:01:16.256584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerStarted","Data":"223acbb2112ca2a3d9ad487a70b7c917e0e1a36ebc9f6ec9d51871ef60679ac3"} Jan 26 23:01:17 crc kubenswrapper[4793]: I0126 23:01:17.264824 4793 generic.go:334] "Generic (PLEG): container finished" podID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerID="a4745030e8ec4c487c22710d1f886bb405f34f7e81e13a2cf7aa864f4c721a34" exitCode=0 Jan 26 23:01:17 crc kubenswrapper[4793]: I0126 23:01:17.265177 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"a4745030e8ec4c487c22710d1f886bb405f34f7e81e13a2cf7aa864f4c721a34"} Jan 26 23:01:18 crc kubenswrapper[4793]: I0126 23:01:18.322719 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:01:18 crc kubenswrapper[4793]: I0126 23:01:18.323084 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:01:24 crc kubenswrapper[4793]: I0126 23:01:24.327977 4793 generic.go:334] "Generic (PLEG): container finished" podID="e023476b-0850-4e37-97da-0cd1a5d9425f" 
containerID="77fd0c333b126c4a68f3d46a8f89c8cc529d2716cd15ab65a4dd0c736b61937a" exitCode=0 Jan 26 23:01:24 crc kubenswrapper[4793]: I0126 23:01:24.328148 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"77fd0c333b126c4a68f3d46a8f89c8cc529d2716cd15ab65a4dd0c736b61937a"} Jan 26 23:01:25 crc kubenswrapper[4793]: I0126 23:01:25.337517 4793 generic.go:334] "Generic (PLEG): container finished" podID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerID="fa3aa6e996ad3b391dbd7651793d78bb8ab394e82ddca76a2e50f865b1270576" exitCode=0 Jan 26 23:01:25 crc kubenswrapper[4793]: I0126 23:01:25.337561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"fa3aa6e996ad3b391dbd7651793d78bb8ab394e82ddca76a2e50f865b1270576"} Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.662236 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.783693 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") pod \"e023476b-0850-4e37-97da-0cd1a5d9425f\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.783796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") pod \"e023476b-0850-4e37-97da-0cd1a5d9425f\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.784296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") pod \"e023476b-0850-4e37-97da-0cd1a5d9425f\" (UID: \"e023476b-0850-4e37-97da-0cd1a5d9425f\") " Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.786312 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle" (OuterVolumeSpecName: "bundle") pod "e023476b-0850-4e37-97da-0cd1a5d9425f" (UID: "e023476b-0850-4e37-97da-0cd1a5d9425f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.790730 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4" (OuterVolumeSpecName: "kube-api-access-jctk4") pod "e023476b-0850-4e37-97da-0cd1a5d9425f" (UID: "e023476b-0850-4e37-97da-0cd1a5d9425f"). InnerVolumeSpecName "kube-api-access-jctk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.798524 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util" (OuterVolumeSpecName: "util") pod "e023476b-0850-4e37-97da-0cd1a5d9425f" (UID: "e023476b-0850-4e37-97da-0cd1a5d9425f"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.886398 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jctk4\" (UniqueName: \"kubernetes.io/projected/e023476b-0850-4e37-97da-0cd1a5d9425f-kube-api-access-jctk4\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.886438 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:26 crc kubenswrapper[4793]: I0126 23:01:26.886453 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e023476b-0850-4e37-97da-0cd1a5d9425f-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:01:27 crc kubenswrapper[4793]: I0126 23:01:27.356737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" event={"ID":"e023476b-0850-4e37-97da-0cd1a5d9425f","Type":"ContainerDied","Data":"223acbb2112ca2a3d9ad487a70b7c917e0e1a36ebc9f6ec9d51871ef60679ac3"} Jan 26 23:01:27 crc kubenswrapper[4793]: I0126 23:01:27.356806 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj" Jan 26 23:01:27 crc kubenswrapper[4793]: I0126 23:01:27.356813 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="223acbb2112ca2a3d9ad487a70b7c917e0e1a36ebc9f6ec9d51871ef60679ac3" Jan 26 23:01:48 crc kubenswrapper[4793]: I0126 23:01:48.322604 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:01:48 crc kubenswrapper[4793]: I0126 23:01:48.323158 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.322621 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.323250 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.323319 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.324223 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.324294 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3" gracePeriod=600 Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.741781 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3" exitCode=0 Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.741860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3"} Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.742151 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerStarted","Data":"b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"} Jan 26 23:02:18 crc kubenswrapper[4793]: I0126 23:02:18.742174 4793 scope.go:117] "RemoveContainer" containerID="0f0bcae8737d5ff963297bee9670c968431e1a6c097e0d397cb380eea5515587" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.007147 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"] Jan 26 23:02:43 crc kubenswrapper[4793]: E0126 23:02:43.007967 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="util" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.007980 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="util" Jan 26 23:02:43 crc kubenswrapper[4793]: E0126 23:02:43.008017 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="extract" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.008023 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="extract" Jan 26 23:02:43 crc kubenswrapper[4793]: E0126 23:02:43.008056 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="pull" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.008062 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="pull" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.008227 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e023476b-0850-4e37-97da-0cd1a5d9425f" containerName="extract" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.009220 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.031254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"] Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.105835 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.105928 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.106041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.207854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.207931 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.207990 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.208484 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.208628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.247340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"redhat-operators-4qxcn\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.332130 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.852176 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"] Jan 26 23:02:43 crc kubenswrapper[4793]: I0126 23:02:43.935114 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerStarted","Data":"59673bd85054904a5f4197ca518a84e664721612c44bea1adaa36581d08d7ad7"} Jan 26 23:02:44 crc kubenswrapper[4793]: I0126 23:02:44.943989 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerID="10b042a5c8dd85bb9307f96395ba6d325886a51a7c87d21538e763bc64424eea" exitCode=0 Jan 26 23:02:44 crc kubenswrapper[4793]: I0126 23:02:44.944074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"10b042a5c8dd85bb9307f96395ba6d325886a51a7c87d21538e763bc64424eea"} Jan 26 23:02:44 crc kubenswrapper[4793]: I0126 23:02:44.946860 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:02:46 crc kubenswrapper[4793]: I0126 23:02:46.964895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerStarted","Data":"3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992"} Jan 26 23:02:47 crc kubenswrapper[4793]: I0126 23:02:47.975350 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerID="3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992" exitCode=0 Jan 26 23:02:47 crc kubenswrapper[4793]: I0126 23:02:47.975399 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992"} Jan 26 23:02:49 crc kubenswrapper[4793]: I0126 23:02:49.992926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerStarted","Data":"5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2"} Jan 26 23:02:50 crc kubenswrapper[4793]: I0126 23:02:50.038264 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qxcn" podStartSLOduration=3.976172911 podStartE2EDuration="8.038235121s" podCreationTimestamp="2026-01-26 23:02:42 +0000 UTC" firstStartedPulling="2026-01-26 23:02:44.946666689 +0000 UTC m=+1379.935438201" lastFinishedPulling="2026-01-26 23:02:49.008728899 +0000 UTC m=+1383.997500411" observedRunningTime="2026-01-26 23:02:50.030587366 +0000 UTC m=+1385.019358888" watchObservedRunningTime="2026-01-26 23:02:50.038235121 +0000 UTC m=+1385.027006633" Jan 26 23:02:53 crc 
Jan 26 23:02:53 crc kubenswrapper[4793]: I0126 23:02:53.332399 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:53 crc kubenswrapper[4793]: I0126 23:02:53.332769 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:02:54 crc kubenswrapper[4793]: I0126 23:02:54.384207 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qxcn" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" probeResult="failure" output=< Jan 26 23:02:54 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 26 23:02:54 crc kubenswrapper[4793]: > Jan 26 23:03:03 crc kubenswrapper[4793]: I0126 23:03:03.395678 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:03:03 crc kubenswrapper[4793]: I0126 23:03:03.436068 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:03:05 crc kubenswrapper[4793]: I0126 23:03:05.805220 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"] Jan 26 23:03:05 crc kubenswrapper[4793]: I0126 23:03:05.805929 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qxcn" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" containerID="cri-o://5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2" gracePeriod=2 Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.132812 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerID="5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2" exitCode=0 Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.132835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2"}
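The startup-probe failure above (timeout: failed to connect service ":50051" within 1s) comes from a gRPC health check against the registry-server port; once the catalog is being served, the probe flips to started and readiness to ready, after which the API DELETE seen next tears the pod down. A rough Go equivalent of such a check, assuming the standard grpc_health_v1 service (a sketch, not the actual probe binary shipped in the catalog image; the one-second timeout mirrors the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// One-second budget, mirroring the "within 1s" in the probe output above.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		// The registry-server is still extracting its catalog, so nothing
		// listens on :50051 yet and the dial times out.
		fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.Status != healthpb.HealthCheckResponse_SERVING {
		fmt.Println("not serving yet")
		return
	}
	// Success flips the startup probe to "started"; readiness follows.
	fmt.Println("serving")
}
```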
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.269546 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") pod \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.269681 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") pod \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.269792 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") pod \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\" (UID: \"f3348a11-eb42-42bd-bf6e-8d1f08402fc2\") " Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.270679 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities" (OuterVolumeSpecName: "utilities") pod "f3348a11-eb42-42bd-bf6e-8d1f08402fc2" (UID: "f3348a11-eb42-42bd-bf6e-8d1f08402fc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.275964 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt" (OuterVolumeSpecName: "kube-api-access-7vfjt") pod "f3348a11-eb42-42bd-bf6e-8d1f08402fc2" (UID: "f3348a11-eb42-42bd-bf6e-8d1f08402fc2"). InnerVolumeSpecName "kube-api-access-7vfjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.371290 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.371320 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vfjt\" (UniqueName: \"kubernetes.io/projected/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-kube-api-access-7vfjt\") on node \"crc\" DevicePath \"\"" Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.393512 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3348a11-eb42-42bd-bf6e-8d1f08402fc2" (UID: "f3348a11-eb42-42bd-bf6e-8d1f08402fc2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:03:06 crc kubenswrapper[4793]: I0126 23:03:06.472674 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3348a11-eb42-42bd-bf6e-8d1f08402fc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.141587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qxcn" event={"ID":"f3348a11-eb42-42bd-bf6e-8d1f08402fc2","Type":"ContainerDied","Data":"59673bd85054904a5f4197ca518a84e664721612c44bea1adaa36581d08d7ad7"} Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.141636 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qxcn" Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.141640 4793 scope.go:117] "RemoveContainer" containerID="5b5dd36f2813a9da99774f2bcbed5dcca352a0d6af2cd6662df6d38589a8e5a2" Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.163540 4793 scope.go:117] "RemoveContainer" containerID="3ca505a73234ac600a4236422b0f13f611116978422e19c706c0dab14a661992" Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.172499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"] Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.184169 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qxcn"] Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.200752 4793 scope.go:117] "RemoveContainer" containerID="10b042a5c8dd85bb9307f96395ba6d325886a51a7c87d21538e763bc64424eea" Jan 26 23:03:07 crc kubenswrapper[4793]: I0126 23:03:07.773545 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" path="/var/lib/kubelet/pods/f3348a11-eb42-42bd-bf6e-8d1f08402fc2/volumes" Jan 26 23:03:46 crc kubenswrapper[4793]: I0126 23:03:46.471359 4793 scope.go:117] "RemoveContainer" containerID="4ca2ce24aa7792f3f194f47af65c5c2a24088c6bd8e2fe97ed52b89cbf67652d" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.632125 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:08 crc kubenswrapper[4793]: E0126 23:04:08.632983 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-content" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.632997 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-content" Jan 26 23:04:08 crc kubenswrapper[4793]: E0126 23:04:08.633053 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.633060 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" Jan 26 23:04:08 crc kubenswrapper[4793]: E0126 23:04:08.633079 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-utilities" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.633086 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="extract-utilities" Jan 26 23:04:08 crc kubenswrapper[4793]: 
Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.633243 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3348a11-eb42-42bd-bf6e-8d1f08402fc2" containerName="registry-server" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.634283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.641443 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.689265 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.689326 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.689462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791610 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.791778 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.792211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.813392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"certified-operators-7b2qz\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:08 crc kubenswrapper[4793]: I0126 23:04:08.980175 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:09 crc kubenswrapper[4793]: I0126 23:04:09.575579 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:09 crc kubenswrapper[4793]: I0126 23:04:09.627834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerStarted","Data":"699fc96e963d34aae177ddd3239e0f6efefb165d36950a2ff83786b3615f0891"} Jan 26 23:04:10 crc kubenswrapper[4793]: I0126 23:04:10.638003 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerID="2ebbfbd36a4f00a429af72e8bd75253733316bd4275f445fe8f46a09b8506eac" exitCode=0 Jan 26 23:04:10 crc kubenswrapper[4793]: I0126 23:04:10.638112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"2ebbfbd36a4f00a429af72e8bd75253733316bd4275f445fe8f46a09b8506eac"} Jan 26 23:04:13 crc kubenswrapper[4793]: I0126 23:04:13.663419 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerID="7959fa44827f66b3423528084a9583b377c547bf8bebf66abccbbc5d445f2a11" exitCode=0 Jan 26 23:04:13 crc kubenswrapper[4793]: I0126 23:04:13.663480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"7959fa44827f66b3423528084a9583b377c547bf8bebf66abccbbc5d445f2a11"} Jan 26 23:04:16 crc kubenswrapper[4793]: I0126 23:04:16.710860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerStarted","Data":"64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2"} Jan 26 23:04:16 crc kubenswrapper[4793]: I0126 23:04:16.737402 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7b2qz" podStartSLOduration=4.047006454 podStartE2EDuration="8.73738343s" podCreationTimestamp="2026-01-26 23:04:08 +0000 UTC" firstStartedPulling="2026-01-26 23:04:10.639629328 +0000 UTC m=+1465.628400850" lastFinishedPulling="2026-01-26 23:04:15.330006314 +0000 UTC m=+1470.318777826" observedRunningTime="2026-01-26 23:04:16.733153725 +0000 UTC m=+1471.721925247" watchObservedRunningTime="2026-01-26 23:04:16.73738343 +0000 UTC m=+1471.726154942" Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.322997 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.323095 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.982087 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:18 crc kubenswrapper[4793]: I0126 23:04:18.982658 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:19 crc kubenswrapper[4793]: I0126 23:04:19.033507 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:20 crc kubenswrapper[4793]: I0126 23:04:20.805264 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:20 crc kubenswrapper[4793]: I0126 23:04:20.863508 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:22 crc kubenswrapper[4793]: I0126 23:04:22.755857 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7b2qz" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" containerID="cri-o://64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2" gracePeriod=2 Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.766294 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerID="64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2" exitCode=0 Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.769716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2"} Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.832775 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.943249 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") pod \"9e05e40a-7d6e-4928-ae25-a286d1d55532\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.943384 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") pod \"9e05e40a-7d6e-4928-ae25-a286d1d55532\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.943472 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") pod \"9e05e40a-7d6e-4928-ae25-a286d1d55532\" (UID: \"9e05e40a-7d6e-4928-ae25-a286d1d55532\") " Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.944245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities" (OuterVolumeSpecName: "utilities") pod "9e05e40a-7d6e-4928-ae25-a286d1d55532" (UID: "9e05e40a-7d6e-4928-ae25-a286d1d55532"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:04:23 crc kubenswrapper[4793]: I0126 23:04:23.951542 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84" (OuterVolumeSpecName: "kube-api-access-cbc84") pod "9e05e40a-7d6e-4928-ae25-a286d1d55532" (UID: "9e05e40a-7d6e-4928-ae25-a286d1d55532"). InnerVolumeSpecName "kube-api-access-cbc84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.045673 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbc84\" (UniqueName: \"kubernetes.io/projected/9e05e40a-7d6e-4928-ae25-a286d1d55532-kube-api-access-cbc84\") on node \"crc\" DevicePath \"\"" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.045708 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.776061 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b2qz" event={"ID":"9e05e40a-7d6e-4928-ae25-a286d1d55532","Type":"ContainerDied","Data":"699fc96e963d34aae177ddd3239e0f6efefb165d36950a2ff83786b3615f0891"} Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.776127 4793 scope.go:117] "RemoveContainer" containerID="64c4b69f3118bd8cf32760e7d3938d902fd5af9d994b161517334e4edf3ee7a2" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.776131 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b2qz" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.795450 4793 scope.go:117] "RemoveContainer" containerID="7959fa44827f66b3423528084a9583b377c547bf8bebf66abccbbc5d445f2a11" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.811511 4793 scope.go:117] "RemoveContainer" containerID="2ebbfbd36a4f00a429af72e8bd75253733316bd4275f445fe8f46a09b8506eac" Jan 26 23:04:24 crc kubenswrapper[4793]: I0126 23:04:24.967070 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e05e40a-7d6e-4928-ae25-a286d1d55532" (UID: "9e05e40a-7d6e-4928-ae25-a286d1d55532"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.060034 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e05e40a-7d6e-4928-ae25-a286d1d55532-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.109805 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.121064 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7b2qz"] Jan 26 23:04:25 crc kubenswrapper[4793]: I0126 23:04:25.770858 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" path="/var/lib/kubelet/pods/9e05e40a-7d6e-4928-ae25-a286d1d55532/volumes" Jan 26 23:04:48 crc kubenswrapper[4793]: I0126 23:04:48.322583 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:04:48 crc kubenswrapper[4793]: I0126 23:04:48.323237 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.322461 4793 patch_prober.go:28] interesting pod/machine-config-daemon-5htjl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323050 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323098 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323779 4793 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"} pod="openshift-machine-config-operator/machine-config-daemon-5htjl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:05:18 crc kubenswrapper[4793]: I0126 23:05:18.323838 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerName="machine-config-daemon" containerID="cri-o://b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" gracePeriod=600 Jan 26 23:05:18 crc kubenswrapper[4793]: E0126 23:05:18.590629 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.186751 4793 generic.go:334] "Generic (PLEG): container finished" podID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" exitCode=0 Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.186795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" event={"ID":"22a78b43-c8a5-48e0-8fe3-89bc7b449391","Type":"ContainerDied","Data":"b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"} Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.186825 4793 scope.go:117] "RemoveContainer" containerID="c6d612632a90ff0cb1b4809ea801026e4879766ff5fb99a4ea1a8127b7cf2cc3" Jan 26 23:05:19 crc kubenswrapper[4793]: I0126 23:05:19.187585 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:05:19 crc kubenswrapper[4793]: E0126 23:05:19.188082 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:32 crc kubenswrapper[4793]: I0126 23:05:32.761509 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:05:32 crc kubenswrapper[4793]: E0126 23:05:32.762270 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.573624 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 
Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 23:05:36.574245 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-utilities" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574260 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-utilities" Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 23:05:36.574285 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-content" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574292 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="extract-content" Jan 26 23:05:36 crc kubenswrapper[4793]: E0126 23:05:36.574313 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574319 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.574467 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e05e40a-7d6e-4928-ae25-a286d1d55532" containerName="registry-server" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.575466 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.599311 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.686870 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.688049 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.688125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.789908 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.789988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.790097 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.790619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.790641 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.809380 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"community-operators-n62d9\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:36 crc kubenswrapper[4793]: I0126 23:05:36.896548 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:37 crc kubenswrapper[4793]: I0126 23:05:37.383602 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:37 crc kubenswrapper[4793]: I0126 23:05:37.963713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerStarted","Data":"b2c7be3b64f188ea94ff64c5424b363e760f760a871b1a440341ca934af4e468"} Jan 26 23:05:38 crc kubenswrapper[4793]: I0126 23:05:38.971706 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" exitCode=0 Jan 26 23:05:38 crc kubenswrapper[4793]: I0126 23:05:38.971750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842"} Jan 26 23:05:43 crc kubenswrapper[4793]: I0126 23:05:43.004558 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" exitCode=0 Jan 26 23:05:43 crc kubenswrapper[4793]: I0126 23:05:43.004625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8"} Jan 26 23:05:44 crc kubenswrapper[4793]: I0126 23:05:44.014507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerStarted","Data":"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b"} Jan 26 23:05:44 crc kubenswrapper[4793]: I0126 23:05:44.040085 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n62d9" podStartSLOduration=3.238608182 podStartE2EDuration="8.040065515s" podCreationTimestamp="2026-01-26 23:05:36 +0000 UTC" firstStartedPulling="2026-01-26 23:05:38.973451499 +0000 UTC m=+1553.962223011" lastFinishedPulling="2026-01-26 23:05:43.774908822 +0000 UTC m=+1558.763680344" observedRunningTime="2026-01-26 23:05:44.0307054 +0000 UTC m=+1559.019476942" watchObservedRunningTime="2026-01-26 23:05:44.040065515 +0000 UTC m=+1559.028837027" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.575550 4793 scope.go:117] "RemoveContainer" containerID="89a414f6ad4eff334e7c77f7dc32f2bcaf583553d9176e51a8eb3503f48b90e9" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.897561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.897611 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:46 crc kubenswrapper[4793]: I0126 23:05:46.935789 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:47 crc kubenswrapper[4793]: I0126 23:05:47.761473 4793 scope.go:117] "RemoveContainer" 
containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:05:47 crc kubenswrapper[4793]: E0126 23:05:47.762110 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:05:56 crc kubenswrapper[4793]: I0126 23:05:56.942755 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:56 crc kubenswrapper[4793]: I0126 23:05:56.987329 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:57 crc kubenswrapper[4793]: I0126 23:05:57.106903 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n62d9" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" containerID="cri-o://896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" gracePeriod=2 Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.051709 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116809 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" exitCode=0 Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b"} Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116894 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n62d9" event={"ID":"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94","Type":"ContainerDied","Data":"b2c7be3b64f188ea94ff64c5424b363e760f760a871b1a440341ca934af4e468"} Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116918 4793 scope.go:117] "RemoveContainer" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.116916 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n62d9" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.128395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") pod \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.128474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") pod \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.128593 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") pod \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\" (UID: \"3a8cdb3d-5f1f-4090-8b76-6c8ada204a94\") " Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.129350 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities" (OuterVolumeSpecName: "utilities") pod "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" (UID: "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.135860 4793 scope.go:117] "RemoveContainer" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.153419 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths" (OuterVolumeSpecName: "kube-api-access-cvths") pod "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" (UID: "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94"). InnerVolumeSpecName "kube-api-access-cvths". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.184449 4793 scope.go:117] "RemoveContainer" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.192748 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" (UID: "3a8cdb3d-5f1f-4090-8b76-6c8ada204a94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.216282 4793 scope.go:117] "RemoveContainer" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" Jan 26 23:05:58 crc kubenswrapper[4793]: E0126 23:05:58.216817 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b\": container with ID starting with 896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b not found: ID does not exist" containerID="896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.216877 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b"} err="failed to get container status \"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b\": rpc error: code = NotFound desc = could not find container \"896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b\": container with ID starting with 896bc2ff95472c985ba2c4ebe1aa3e88d47af2eaf5a801a8d9dc0a630b6c1a6b not found: ID does not exist" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.216910 4793 scope.go:117] "RemoveContainer" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" Jan 26 23:05:58 crc kubenswrapper[4793]: E0126 23:05:58.217322 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8\": container with ID starting with a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8 not found: ID does not exist" containerID="a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.217365 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8"} err="failed to get container status \"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8\": rpc error: code = NotFound desc = could not find container \"a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8\": container with ID starting with a56588984210897bed2c1f1ae3aff17b72bea9133ff57646c525bd6f2e9b2ea8 not found: ID does not exist" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.217394 4793 scope.go:117] "RemoveContainer" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" Jan 26 23:05:58 crc kubenswrapper[4793]: E0126 23:05:58.217688 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842\": container with ID starting with 2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842 not found: ID does not exist" containerID="2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.217727 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842"} err="failed to get container status \"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842\": rpc error: code = NotFound desc = could not 
find container \"2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842\": container with ID starting with 2d603d64df2df3f30f63e5ba818cc6ae39bc96d1de28d37932422c6576c2c842 not found: ID does not exist" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.230171 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.230230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvths\" (UniqueName: \"kubernetes.io/projected/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-kube-api-access-cvths\") on node \"crc\" DevicePath \"\"" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.230243 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.453497 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:58 crc kubenswrapper[4793]: I0126 23:05:58.461458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n62d9"] Jan 26 23:05:59 crc kubenswrapper[4793]: I0126 23:05:59.768957 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" path="/var/lib/kubelet/pods/3a8cdb3d-5f1f-4090-8b76-6c8ada204a94/volumes" Jan 26 23:06:02 crc kubenswrapper[4793]: I0126 23:06:02.761406 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:02 crc kubenswrapper[4793]: E0126 23:06:02.761940 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:06 crc kubenswrapper[4793]: I0126 23:06:06.782755 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-517e-account-create-update-z5vlt_a62f2482-4c97-42ed-92d7-1b0dc5319971/mariadb-account-create-update/0.log" Jan 26 23:06:07 crc kubenswrapper[4793]: I0126 23:06:07.317577 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-564b65fb54-5sx8h_2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348/keystone-api/0.log" Jan 26 23:06:07 crc kubenswrapper[4793]: I0126 23:06:07.890990 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-5hcwl_66c97183-25d5-4ca6-bede-619df68d1471/keystone-bootstrap/0.log" Jan 26 23:06:08 crc kubenswrapper[4793]: I0126 23:06:08.452377 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-cron-29491141-mgww5_86415a48-3c2d-430b-a38c-f43e1c518984/keystone-cron/0.log" Jan 26 23:06:08 crc kubenswrapper[4793]: I0126 23:06:08.971811 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-566zk_74d39f02-fd88-4b8f-8f71-0f4f1386ad9a/mariadb-database-create/0.log" Jan 26 23:06:09 crc kubenswrapper[4793]: I0126 23:06:09.488620 4793 
log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-zhxm7_288bfc86-c5a0-401c-bbda-ea48bbbd855c/keystone-db-sync/0.log" Jan 26 23:06:10 crc kubenswrapper[4793]: I0126 23:06:10.259313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_d933c86b-bb42-4d94-9eb2-65888a5e95ab/memcached/0.log" Jan 26 23:06:10 crc kubenswrapper[4793]: I0126 23:06:10.833787 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1fc3daf2-84e7-4831-9eae-155632f1b0cd/galera/0.log" Jan 26 23:06:11 crc kubenswrapper[4793]: I0126 23:06:11.407699 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_9cf954a6-b71a-49f9-93c5-618d4e944159/galera/0.log" Jan 26 23:06:11 crc kubenswrapper[4793]: I0126 23:06:11.962511 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_dfe7ef28-7eb1-48c8-b2bb-8990933e8971/openstackclient/0.log" Jan 26 23:06:12 crc kubenswrapper[4793]: I0126 23:06:12.488023 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-655b5d9d44-xt9hv_a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e/placement-log/0.log" Jan 26 23:06:12 crc kubenswrapper[4793]: I0126 23:06:12.931776 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-ca47-account-create-update-8c675_05214d2f-078c-43c5-bf4d-c8d80580ce8f/mariadb-account-create-update/0.log" Jan 26 23:06:13 crc kubenswrapper[4793]: I0126 23:06:13.323016 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-create-bz7cb_466a301d-4d91-4f49-831a-7d7f07ecd1bd/mariadb-database-create/0.log" Jan 26 23:06:13 crc kubenswrapper[4793]: I0126 23:06:13.752429 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-rn9xt_26147a27-d0db-498e-bf1d-fb496d6a7b48/placement-db-sync/0.log" Jan 26 23:06:14 crc kubenswrapper[4793]: I0126 23:06:14.232988 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a/rabbitmq/0.log" Jan 26 23:06:14 crc kubenswrapper[4793]: I0126 23:06:14.774257 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_a8704cd7-a5e9-45ca-9886-cef2f797c7f1/rabbitmq/0.log" Jan 26 23:06:15 crc kubenswrapper[4793]: I0126 23:06:15.271369 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_70c43064-b9c2-4f01-9bc2-5431c5dca494/rabbitmq/0.log" Jan 26 23:06:15 crc kubenswrapper[4793]: I0126 23:06:15.741937 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_733142d4-49c6-4e25-a160-78aa1118d296/rabbitmq/0.log" Jan 26 23:06:16 crc kubenswrapper[4793]: I0126 23:06:16.265951 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-hrh5x_bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a/mariadb-account-create-update/0.log" Jan 26 23:06:16 crc kubenswrapper[4793]: I0126 23:06:16.762040 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:16 crc kubenswrapper[4793]: E0126 23:06:16.762993 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:28 crc kubenswrapper[4793]: I0126 23:06:28.761670 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:28 crc kubenswrapper[4793]: E0126 23:06:28.762438 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:39 crc kubenswrapper[4793]: I0126 23:06:39.761145 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:39 crc kubenswrapper[4793]: E0126 23:06:39.762816 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:46 crc kubenswrapper[4793]: I0126 23:06:46.664415 4793 scope.go:117] "RemoveContainer" containerID="fdea9b9e0baa0a798c51488874e92138a3f1af0ea746b2e505a1d80e6411576f" Jan 26 23:06:48 crc kubenswrapper[4793]: I0126 23:06:48.992287 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj_e023476b-0850-4e37-97da-0cd1a5d9425f/extract/0.log" Jan 26 23:06:49 crc kubenswrapper[4793]: I0126 23:06:49.433727 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf_fcc64fc2-d867-4a08-8df7-0464de3c133e/extract/0.log" Jan 26 23:06:49 crc kubenswrapper[4793]: I0126 23:06:49.862231 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6987f66698-542qx_71c7d600-2dac-4e19-a33f-0311a8342774/manager/0.log" Jan 26 23:06:50 crc kubenswrapper[4793]: I0126 23:06:50.262664 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-dgk2z_58fa089e-6ccd-4521-88ff-e65e6928b738/manager/0.log" Jan 26 23:06:50 crc kubenswrapper[4793]: I0126 23:06:50.705159 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-vnt9q_4190249d-f34c-4b1a-a4da-038f7a806fc6/manager/0.log" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.119821 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-cpwtx_f15117fc-93f9-498b-b831-e87094aa991e/manager/0.log" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.523693 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-954b94f75-fnjqt_18b41715-6c23-4821-b0da-1c17b5010375/manager/0.log" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.761530 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:06:51 crc kubenswrapper[4793]: E0126 23:06:51.761847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:06:51 crc kubenswrapper[4793]: I0126 23:06:51.909720 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkwjg_42a387a4-faad-41fe-bfa9-0f600a06e6e0/manager/0.log" Jan 26 23:06:52 crc kubenswrapper[4793]: I0126 23:06:52.405051 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-8zm8h_eb8ed64e-38aa-4e1f-be80-29be415125fd/manager/0.log" Jan 26 23:06:52 crc kubenswrapper[4793]: I0126 23:06:52.821805 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-7twhn_bcf4192a-ff9b-4b02-83d0-a17ecd3ba795/manager/0.log" Jan 26 23:06:53 crc kubenswrapper[4793]: I0126 23:06:53.276856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-g2r87_22d5cae5-26fb-4a47-97ac-78ba7120d29c/manager/0.log" Jan 26 23:06:53 crc kubenswrapper[4793]: I0126 23:06:53.691757 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-flkdn_c6fcff5b-7bdb-4607-bd4d-389c863ddba6/manager/0.log" Jan 26 23:06:54 crc kubenswrapper[4793]: I0126 23:06:54.144959 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf_da1ca3d4-2bae-4133-88c8-b67f74ebc6ab/manager/0.log" Jan 26 23:06:54 crc kubenswrapper[4793]: I0126 23:06:54.558886 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vxk8h_b934b85e-14a0-4ad2-bdc0-82280eb346a9/manager/0.log" Jan 26 23:06:54 crc kubenswrapper[4793]: I0126 23:06:54.993051 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5b4fc7b894-6xkrk_209b817f-0bec-4d6b-814c-ae2a07913a56/manager/0.log" Jan 26 23:06:55 crc kubenswrapper[4793]: I0126 23:06:55.473923 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-864fr_c0a1696e-8859-4708-84f1-57b65d7cc16a/registry-server/0.log" Jan 26 23:06:55 crc kubenswrapper[4793]: I0126 23:06:55.900065 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-756f86fc74-mvr7p_4f086064-ee5a-47cf-bf97-fc2a423d8c33/manager/0.log" Jan 26 23:06:56 crc kubenswrapper[4793]: I0126 23:06:56.300327 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4_86ca275a-4c50-479e-9ed5-4e03dda309cf/manager/0.log" Jan 26 23:06:56 crc kubenswrapper[4793]: I0126 23:06:56.914331 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c5bb8bdbb-6lmgq_b1cd2fec-ba5b-4984-90c5-565df4ef5cd1/manager/0.log" Jan 26 23:06:57 crc kubenswrapper[4793]: I0126 23:06:57.324588 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j9sn2_b2f6b1be-cc0d-47e0-8375-704e63bf5a2b/registry-server/0.log" Jan 26 23:06:57 crc kubenswrapper[4793]: I0126 23:06:57.730029 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-2jn8d_6e0878fd-53dc-4048-9267-2520c5919067/manager/0.log" Jan 26 23:06:58 crc kubenswrapper[4793]: I0126 23:06:58.151140 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-bp8k8_7968980a-764c-4cd2-b77f-ed33fffbd294/manager/0.log" Jan 26 23:06:58 crc kubenswrapper[4793]: I0126 23:06:58.581564 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vzmsv_d4c8a5bf-4e12-46b4-9d2a-1a098416e90a/operator/0.log" Jan 26 23:06:58 crc kubenswrapper[4793]: I0126 23:06:58.979952 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-rrmcg_06401d29-ca17-43df-865a-26523cf84f67/manager/0.log" Jan 26 23:06:59 crc kubenswrapper[4793]: I0126 23:06:59.391784 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-j292m_65c6d17b-b112-421c-a686-ce1601f91181/manager/0.log" Jan 26 23:06:59 crc kubenswrapper[4793]: I0126 23:06:59.781414 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-sgcrb_6e38b64f-cd6a-4cac-a125-be388bb0dc78/manager/0.log" Jan 26 23:07:00 crc kubenswrapper[4793]: I0126 23:07:00.186540 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-cb484894b-qxf7r_d0e46999-98e6-40a6-99a2-f4c01dad0f81/manager/0.log" Jan 26 23:07:04 crc kubenswrapper[4793]: I0126 23:07:04.761501 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:04 crc kubenswrapper[4793]: E0126 23:07:04.762277 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:04 crc kubenswrapper[4793]: I0126 23:07:04.919438 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-517e-account-create-update-z5vlt_a62f2482-4c97-42ed-92d7-1b0dc5319971/mariadb-account-create-update/0.log" Jan 26 23:07:05 crc kubenswrapper[4793]: I0126 23:07:05.402345 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_keystone-564b65fb54-5sx8h_2dfd7a46-fa72-4bfe-9f03-7ac1c0fed348/keystone-api/0.log" Jan 26 23:07:05 crc kubenswrapper[4793]: I0126 23:07:05.952075 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-5hcwl_66c97183-25d5-4ca6-bede-619df68d1471/keystone-bootstrap/0.log" Jan 26 23:07:06 crc kubenswrapper[4793]: I0126 23:07:06.522141 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-cron-29491141-mgww5_86415a48-3c2d-430b-a38c-f43e1c518984/keystone-cron/0.log" Jan 26 23:07:07 crc kubenswrapper[4793]: I0126 23:07:07.068195 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-566zk_74d39f02-fd88-4b8f-8f71-0f4f1386ad9a/mariadb-database-create/0.log" Jan 26 23:07:07 crc kubenswrapper[4793]: I0126 23:07:07.567774 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-zhxm7_288bfc86-c5a0-401c-bbda-ea48bbbd855c/keystone-db-sync/0.log" Jan 26 23:07:08 crc kubenswrapper[4793]: I0126 23:07:08.271768 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_d933c86b-bb42-4d94-9eb2-65888a5e95ab/memcached/0.log" Jan 26 23:07:08 crc kubenswrapper[4793]: I0126 23:07:08.823700 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1fc3daf2-84e7-4831-9eae-155632f1b0cd/galera/0.log" Jan 26 23:07:09 crc kubenswrapper[4793]: I0126 23:07:09.347349 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_9cf954a6-b71a-49f9-93c5-618d4e944159/galera/0.log" Jan 26 23:07:09 crc kubenswrapper[4793]: I0126 23:07:09.837725 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_dfe7ef28-7eb1-48c8-b2bb-8990933e8971/openstackclient/0.log" Jan 26 23:07:10 crc kubenswrapper[4793]: I0126 23:07:10.349938 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-655b5d9d44-xt9hv_a7aa4144-bcdd-4fb8-84f4-fb976c8c4a0e/placement-log/0.log" Jan 26 23:07:10 crc kubenswrapper[4793]: I0126 23:07:10.819246 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-ca47-account-create-update-8c675_05214d2f-078c-43c5-bf4d-c8d80580ce8f/mariadb-account-create-update/0.log" Jan 26 23:07:11 crc kubenswrapper[4793]: I0126 23:07:11.206444 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-create-bz7cb_466a301d-4d91-4f49-831a-7d7f07ecd1bd/mariadb-database-create/0.log" Jan 26 23:07:11 crc kubenswrapper[4793]: I0126 23:07:11.621521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-rn9xt_26147a27-d0db-498e-bf1d-fb496d6a7b48/placement-db-sync/0.log" Jan 26 23:07:12 crc kubenswrapper[4793]: I0126 23:07:12.085580 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_4f8cb9b2-1ac7-4fcd-b5f9-a55b46055b3a/rabbitmq/0.log" Jan 26 23:07:12 crc kubenswrapper[4793]: I0126 23:07:12.502686 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_a8704cd7-a5e9-45ca-9886-cef2f797c7f1/rabbitmq/0.log" Jan 26 23:07:12 crc kubenswrapper[4793]: I0126 23:07:12.943681 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_70c43064-b9c2-4f01-9bc2-5431c5dca494/rabbitmq/0.log" Jan 26 23:07:13 crc kubenswrapper[4793]: I0126 23:07:13.360326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_733142d4-49c6-4e25-a160-78aa1118d296/rabbitmq/0.log" Jan 26 23:07:13 crc kubenswrapper[4793]: I0126 23:07:13.793554 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-hrh5x_bc214de7-26ce-4bb6-a66b-cfb8ad7fa48a/mariadb-account-create-update/0.log" Jan 26 23:07:16 crc kubenswrapper[4793]: I0126 23:07:16.761501 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:16 crc kubenswrapper[4793]: E0126 23:07:16.762159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340560 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:17 crc kubenswrapper[4793]: E0126 23:07:17.340854 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-utilities" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340869 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-utilities" Jan 26 23:07:17 crc kubenswrapper[4793]: E0126 23:07:17.340885 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340891 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" Jan 26 23:07:17 crc kubenswrapper[4793]: E0126 23:07:17.340917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-content" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.340923 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="extract-content" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.341059 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8cdb3d-5f1f-4090-8b76-6c8ada204a94" containerName="registry-server" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.342562 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.360481 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.421155 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.421496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.421636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.523130 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.523545 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.523666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.524072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.524339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.550662 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"redhat-marketplace-zvsz6\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:17 crc kubenswrapper[4793]: I0126 23:07:17.663886 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.063683 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.714474 4793 generic.go:334] "Generic (PLEG): container finished" podID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" exitCode=0 Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.714519 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094"} Jan 26 23:07:18 crc kubenswrapper[4793]: I0126 23:07:18.714544 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerStarted","Data":"461ed08c8856de812b1bf4c2e17198e800788dacf0634ecdba5b2a28f3da782c"} Jan 26 23:07:20 crc kubenswrapper[4793]: I0126 23:07:20.732440 4793 generic.go:334] "Generic (PLEG): container finished" podID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" exitCode=0 Jan 26 23:07:20 crc kubenswrapper[4793]: I0126 23:07:20.732591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209"} Jan 26 23:07:21 crc kubenswrapper[4793]: I0126 23:07:21.740296 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerStarted","Data":"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e"} Jan 26 23:07:21 crc kubenswrapper[4793]: I0126 23:07:21.769147 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvsz6" podStartSLOduration=2.336948925 podStartE2EDuration="4.769127112s" podCreationTimestamp="2026-01-26 23:07:17 +0000 UTC" firstStartedPulling="2026-01-26 23:07:18.717080148 +0000 UTC m=+1653.705851660" lastFinishedPulling="2026-01-26 23:07:21.149258335 +0000 UTC m=+1656.138029847" observedRunningTime="2026-01-26 23:07:21.761364061 +0000 UTC m=+1656.750135583" watchObservedRunningTime="2026-01-26 23:07:21.769127112 +0000 UTC m=+1656.757898624" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.665235 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.665623 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.721419 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.837021 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:27 crc kubenswrapper[4793]: I0126 23:07:27.953543 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:29 crc kubenswrapper[4793]: I0126 23:07:29.802779 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvsz6" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" containerID="cri-o://3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" gracePeriod=2 Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.789250 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832783 4793 generic.go:334] "Generic (PLEG): container finished" podID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" exitCode=0 Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e"} Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvsz6" event={"ID":"3db0b8a0-1455-4405-8eba-9dfb8223b6e1","Type":"ContainerDied","Data":"461ed08c8856de812b1bf4c2e17198e800788dacf0634ecdba5b2a28f3da782c"} Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832873 4793 scope.go:117] "RemoveContainer" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.832928 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvsz6" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.850425 4793 scope.go:117] "RemoveContainer" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.874102 4793 scope.go:117] "RemoveContainer" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.897986 4793 scope.go:117] "RemoveContainer" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" Jan 26 23:07:30 crc kubenswrapper[4793]: E0126 23:07:30.898420 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e\": container with ID starting with 3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e not found: ID does not exist" containerID="3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898461 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e"} err="failed to get container status \"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e\": rpc error: code = NotFound desc = could not find container \"3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e\": container with ID starting with 3faa362fb226b6c490b4d677bf804368e130db79694d47d172df83411a6b397e not found: ID does not exist" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898486 4793 scope.go:117] "RemoveContainer" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" Jan 26 23:07:30 crc kubenswrapper[4793]: E0126 23:07:30.898720 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209\": container with ID starting with 7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209 not found: ID does not exist" containerID="7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898747 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209"} err="failed to get container status \"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209\": rpc error: code = NotFound desc = could not find container \"7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209\": container with ID starting with 7cb617c5f495052b182a5b151b81c2ec9fc4dcdb873bc775233217eec17a0209 not found: ID does not exist" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.898766 4793 scope.go:117] "RemoveContainer" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" Jan 26 23:07:30 crc kubenswrapper[4793]: E0126 23:07:30.898999 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094\": container with ID starting with 06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094 not found: ID does not exist" containerID="06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094" 
Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.899020 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094"} err="failed to get container status \"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094\": rpc error: code = NotFound desc = could not find container \"06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094\": container with ID starting with 06b32dc9ab9cf6925b8f5479a414d966a4475370e934b1b164f05e6616fed094 not found: ID does not exist" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.933928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") pod \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.933984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") pod \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.934212 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") pod \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\" (UID: \"3db0b8a0-1455-4405-8eba-9dfb8223b6e1\") " Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.935055 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities" (OuterVolumeSpecName: "utilities") pod "3db0b8a0-1455-4405-8eba-9dfb8223b6e1" (UID: "3db0b8a0-1455-4405-8eba-9dfb8223b6e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.939678 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7" (OuterVolumeSpecName: "kube-api-access-xkzw7") pod "3db0b8a0-1455-4405-8eba-9dfb8223b6e1" (UID: "3db0b8a0-1455-4405-8eba-9dfb8223b6e1"). InnerVolumeSpecName "kube-api-access-xkzw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:07:30 crc kubenswrapper[4793]: I0126 23:07:30.959022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3db0b8a0-1455-4405-8eba-9dfb8223b6e1" (UID: "3db0b8a0-1455-4405-8eba-9dfb8223b6e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.037141 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.037261 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.037273 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkzw7\" (UniqueName: \"kubernetes.io/projected/3db0b8a0-1455-4405-8eba-9dfb8223b6e1-kube-api-access-xkzw7\") on node \"crc\" DevicePath \"\"" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.161626 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.168119 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvsz6"] Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.760953 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:31 crc kubenswrapper[4793]: E0126 23:07:31.761573 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:31 crc kubenswrapper[4793]: I0126 23:07:31.770089 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" path="/var/lib/kubelet/pods/3db0b8a0-1455-4405-8eba-9dfb8223b6e1/volumes" Jan 26 23:07:43 crc kubenswrapper[4793]: I0126 23:07:43.762063 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:43 crc kubenswrapper[4793]: E0126 23:07:43.763555 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:07:44 crc kubenswrapper[4793]: I0126 23:07:44.315645 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_03d08141e710e875f178ce0845cda404e13a8b5f5ad6ab147e14daf5385ghcj_e023476b-0850-4e37-97da-0cd1a5d9425f/extract/0.log" Jan 26 23:07:44 crc kubenswrapper[4793]: I0126 23:07:44.708896 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9e80ad42fbbe9c59ab0d6cad41be440c63c9666d2d9d98508dd91f734048hlf_fcc64fc2-d867-4a08-8df7-0464de3c133e/extract/0.log" Jan 26 23:07:45 crc kubenswrapper[4793]: I0126 23:07:45.082846 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6987f66698-542qx_71c7d600-2dac-4e19-a33f-0311a8342774/manager/0.log" Jan 26 23:07:45 crc kubenswrapper[4793]: I0126 23:07:45.461878 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-dgk2z_58fa089e-6ccd-4521-88ff-e65e6928b738/manager/0.log" Jan 26 23:07:45 crc kubenswrapper[4793]: I0126 23:07:45.846206 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-vnt9q_4190249d-f34c-4b1a-a4da-038f7a806fc6/manager/0.log" Jan 26 23:07:46 crc kubenswrapper[4793]: I0126 23:07:46.274622 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-cpwtx_f15117fc-93f9-498b-b831-e87094aa991e/manager/0.log" Jan 26 23:07:46 crc kubenswrapper[4793]: I0126 23:07:46.708076 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-954b94f75-fnjqt_18b41715-6c23-4821-b0da-1c17b5010375/manager/0.log" Jan 26 23:07:47 crc kubenswrapper[4793]: I0126 23:07:47.105507 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkwjg_42a387a4-faad-41fe-bfa9-0f600a06e6e0/manager/0.log" Jan 26 23:07:47 crc kubenswrapper[4793]: I0126 23:07:47.570975 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-8zm8h_eb8ed64e-38aa-4e1f-be80-29be415125fd/manager/0.log" Jan 26 23:07:47 crc kubenswrapper[4793]: I0126 23:07:47.978548 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-7twhn_bcf4192a-ff9b-4b02-83d0-a17ecd3ba795/manager/0.log" Jan 26 23:07:48 crc kubenswrapper[4793]: I0126 23:07:48.433859 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-g2r87_22d5cae5-26fb-4a47-97ac-78ba7120d29c/manager/0.log" Jan 26 23:07:48 crc kubenswrapper[4793]: I0126 23:07:48.849959 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-flkdn_c6fcff5b-7bdb-4607-bd4d-389c863ddba6/manager/0.log" Jan 26 23:07:49 crc kubenswrapper[4793]: I0126 23:07:49.246952 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-gqnqf_da1ca3d4-2bae-4133-88c8-b67f74ebc6ab/manager/0.log" Jan 26 23:07:49 crc kubenswrapper[4793]: I0126 23:07:49.681366 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vxk8h_b934b85e-14a0-4ad2-bdc0-82280eb346a9/manager/0.log" Jan 26 23:07:50 crc kubenswrapper[4793]: I0126 23:07:50.107872 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5b4fc7b894-6xkrk_209b817f-0bec-4d6b-814c-ae2a07913a56/manager/0.log" Jan 26 23:07:50 crc kubenswrapper[4793]: I0126 23:07:50.584709 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-864fr_c0a1696e-8859-4708-84f1-57b65d7cc16a/registry-server/0.log" Jan 26 23:07:51 crc kubenswrapper[4793]: I0126 23:07:51.001828 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-756f86fc74-mvr7p_4f086064-ee5a-47cf-bf97-fc2a423d8c33/manager/0.log" Jan 26 23:07:51 crc kubenswrapper[4793]: I0126 23:07:51.435399 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854sq4l4_86ca275a-4c50-479e-9ed5-4e03dda309cf/manager/0.log" Jan 26 23:07:52 crc kubenswrapper[4793]: I0126 23:07:52.036140 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c5bb8bdbb-6lmgq_b1cd2fec-ba5b-4984-90c5-565df4ef5cd1/manager/0.log" Jan 26 23:07:52 crc kubenswrapper[4793]: I0126 23:07:52.450060 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j9sn2_b2f6b1be-cc0d-47e0-8375-704e63bf5a2b/registry-server/0.log" Jan 26 23:07:52 crc kubenswrapper[4793]: I0126 23:07:52.878377 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-2jn8d_6e0878fd-53dc-4048-9267-2520c5919067/manager/0.log" Jan 26 23:07:53 crc kubenswrapper[4793]: I0126 23:07:53.305701 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-bp8k8_7968980a-764c-4cd2-b77f-ed33fffbd294/manager/0.log" Jan 26 23:07:53 crc kubenswrapper[4793]: I0126 23:07:53.745911 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vzmsv_d4c8a5bf-4e12-46b4-9d2a-1a098416e90a/operator/0.log" Jan 26 23:07:54 crc kubenswrapper[4793]: I0126 23:07:54.148441 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-rrmcg_06401d29-ca17-43df-865a-26523cf84f67/manager/0.log" Jan 26 23:07:54 crc kubenswrapper[4793]: I0126 23:07:54.552643 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-j292m_65c6d17b-b112-421c-a686-ce1601f91181/manager/0.log" Jan 26 23:07:54 crc kubenswrapper[4793]: I0126 23:07:54.980424 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-sgcrb_6e38b64f-cd6a-4cac-a125-be388bb0dc78/manager/0.log" Jan 26 23:07:55 crc kubenswrapper[4793]: I0126 23:07:55.378722 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-cb484894b-qxf7r_d0e46999-98e6-40a6-99a2-f4c01dad0f81/manager/0.log" Jan 26 23:07:57 crc kubenswrapper[4793]: I0126 23:07:57.761421 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:07:57 crc kubenswrapper[4793]: E0126 23:07:57.762401 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:10 crc kubenswrapper[4793]: I0126 23:08:10.760712 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:08:10 crc kubenswrapper[4793]: E0126 
23:08:10.761598 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.305558 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"] Jan 26 23:08:17 crc kubenswrapper[4793]: E0126 23:08:17.306684 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306700 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" Jan 26 23:08:17 crc kubenswrapper[4793]: E0126 23:08:17.306727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-content" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306733 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-content" Jan 26 23:08:17 crc kubenswrapper[4793]: E0126 23:08:17.306748 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-utilities" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306755 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="extract-utilities" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.306900 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db0b8a0-1455-4405-8eba-9dfb8223b6e1" containerName="registry-server" Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.307723 4793 util.go:30] "No sandbox for pod can be found. 
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.307723 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.310880 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nnzkd"/"default-dockercfg-j2fv4"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.311005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nnzkd"/"kube-root-ca.crt"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.313060 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nnzkd"/"openshift-service-ca.crt"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.330423 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"]
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.359540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.359603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.460972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.461123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.461869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.483409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"must-gather-j7rpq\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:17 crc kubenswrapper[4793]: I0126 23:08:17.629352 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq"
Jan 26 23:08:18 crc kubenswrapper[4793]: I0126 23:08:18.080171 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"]
Jan 26 23:08:18 crc kubenswrapper[4793]: W0126 23:08:18.097487 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1022863d_ebfc_49a0_85f1_a89a8f6fb03c.slice/crio-d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398 WatchSource:0}: Error finding container d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398: Status 404 returned error can't find the container with id d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398
Jan 26 23:08:18 crc kubenswrapper[4793]: I0126 23:08:18.099994 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 23:08:18 crc kubenswrapper[4793]: I0126 23:08:18.165251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" event={"ID":"1022863d-ebfc-49a0-85f1-a89a8f6fb03c","Type":"ContainerStarted","Data":"d3d23826dc94d8c352596a1325613d1a57619df764981da030021f190f40a398"}
Jan 26 23:08:21 crc kubenswrapper[4793]: I0126 23:08:21.761441 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497"
Jan 26 23:08:21 crc kubenswrapper[4793]: E0126 23:08:21.762081 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391"
Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 23:08:32.483155 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest"
Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 23:08:32.484013 4793 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 26 23:08:32 crc kubenswrapper[4793]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then
Jan 26 23:08:32 crc kubenswrapper[4793]: HAVE_SESSION_TOOLS=true
Jan 26 23:08:32 crc kubenswrapper[4793]: else
Jan 26 23:08:32 crc kubenswrapper[4793]: HAVE_SESSION_TOOLS=false
Jan 26 23:08:32 crc kubenswrapper[4793]: fi
Jan 26 23:08:32 crc kubenswrapper[4793]:
Jan 26 23:08:32 crc kubenswrapper[4793]:
Jan 26 23:08:32 crc kubenswrapper[4793]: echo "[disk usage checker] Started"
Jan 26 23:08:32 crc kubenswrapper[4793]: target_dir="/must-gather"
Jan 26 23:08:32 crc kubenswrapper[4793]: usage_percentage_limit="80"
Jan 26 23:08:32 crc kubenswrapper[4793]: while true; do
Jan 26 23:08:32 crc kubenswrapper[4793]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//')
Jan 26 23:08:32 crc kubenswrapper[4793]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}"
Jan 26 23:08:32 crc kubenswrapper[4793]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then
Jan 26 23:08:32 crc kubenswrapper[4793]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..."
Jan 26 23:08:32 crc kubenswrapper[4793]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
Jan 26 23:08:32 crc kubenswrapper[4793]: ps -o sess --no-headers | sort -u | while read sid; do
Jan 26 23:08:32 crc kubenswrapper[4793]: [[ "$sid" -eq "${$}" ]] && continue
Jan 26 23:08:32 crc kubenswrapper[4793]: pkill --signal SIGKILL --session "$sid"
Jan 26 23:08:32 crc kubenswrapper[4793]: done
Jan 26 23:08:32 crc kubenswrapper[4793]: else
Jan 26 23:08:32 crc kubenswrapper[4793]: kill 0
Jan 26 23:08:32 crc kubenswrapper[4793]: fi
Jan 26 23:08:32 crc kubenswrapper[4793]: exit 1
Jan 26 23:08:32 crc kubenswrapper[4793]: fi
Jan 26 23:08:32 crc kubenswrapper[4793]: sleep 5
Jan 26 23:08:32 crc kubenswrapper[4793]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
Jan 26 23:08:32 crc kubenswrapper[4793]: setsid -w bash <<-MUSTGATHER_EOF
Jan 26 23:08:32 crc kubenswrapper[4793]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather
Jan 26 23:08:32 crc kubenswrapper[4793]: MUSTGATHER_EOF
Jan 26 23:08:32 crc kubenswrapper[4793]: else
Jan 26 23:08:32 crc kubenswrapper[4793]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather
Jan 26 23:08:32 crc kubenswrapper[4793]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hl6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-j7rpq_openshift-must-gather-nnzkd(1022863d-ebfc-49a0-85f1-a89a8f6fb03c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 26 23:08:32 crc kubenswrapper[4793]: > logger="UnhandledError"
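
For readability, here is the gather container's command reconstructed from the err=< ... > block above, with the journald prefixes stripped. The shell statements are copied verbatim from the log; only the indentation and the # comments are editorial. The script forks a disk-usage watchdog into the background, then runs the gather entrypoint in its own session (via setsid, when available) so the watchdog can kill it cleanly once the /must-gather volume passes 80% usage:

    if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then
        HAVE_SESSION_TOOLS=true
    else
        HAVE_SESSION_TOOLS=false
    fi

    # Watchdog: poll the output volume every 5 seconds and abort if it fills up.
    echo "[disk usage checker] Started"
    target_dir="/must-gather"
    usage_percentage_limit="80"
    while true; do
        usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//')
        echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}"
        if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then
            echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..."
            if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
                # SIGKILL every session except the main shell's own ("${$}").
                ps -o sess --no-headers | sort -u | while read sid; do
                    [[ "$sid" -eq "${$}" ]] && continue
                    pkill --signal SIGKILL --session "$sid"
                done
            else
                kill 0
            fi
            exit 1
        fi
        sleep 5
    done &

    # The actual collection, in a separate session so the watchdog can target it.
    if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
    setsid -w bash <<-MUSTGATHER_EOF
    ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather
    MUSTGATHER_EOF
    else
    ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather
    fi
    sync && echo 'Caches written to disk'

Launching gather under setsid appears to be what lets pkill --session take down the whole gather process tree without the watchdog killing itself: the [[ "$sid" -eq "${$}" ]] test skips the watchdog's own session, whose session ID matches the main shell's PID.
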
\\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" podUID="1022863d-ebfc-49a0-85f1-a89a8f6fb03c" Jan 26 23:08:32 crc kubenswrapper[4793]: I0126 23:08:32.760870 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:08:32 crc kubenswrapper[4793]: E0126 23:08:32.761107 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:33 crc kubenswrapper[4793]: E0126 23:08:33.275967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" podUID="1022863d-ebfc-49a0-85f1-a89a8f6fb03c" Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.294813 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"] Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.301976 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nnzkd/must-gather-j7rpq"] Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.625336 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.761066 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:08:44 crc kubenswrapper[4793]: E0126 23:08:44.761409 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.805854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") pod \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.806163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1022863d-ebfc-49a0-85f1-a89a8f6fb03c" (UID: "1022863d-ebfc-49a0-85f1-a89a8f6fb03c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.806360 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") pod \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\" (UID: \"1022863d-ebfc-49a0-85f1-a89a8f6fb03c\") " Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.806664 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.815477 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k" (OuterVolumeSpecName: "kube-api-access-8hl6k") pod "1022863d-ebfc-49a0-85f1-a89a8f6fb03c" (UID: "1022863d-ebfc-49a0-85f1-a89a8f6fb03c"). InnerVolumeSpecName "kube-api-access-8hl6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:44 crc kubenswrapper[4793]: I0126 23:08:44.908952 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hl6k\" (UniqueName: \"kubernetes.io/projected/1022863d-ebfc-49a0-85f1-a89a8f6fb03c-kube-api-access-8hl6k\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:45 crc kubenswrapper[4793]: I0126 23:08:45.386906 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nnzkd/must-gather-j7rpq" Jan 26 23:08:45 crc kubenswrapper[4793]: I0126 23:08:45.772018 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1022863d-ebfc-49a0-85f1-a89a8f6fb03c" path="/var/lib/kubelet/pods/1022863d-ebfc-49a0-85f1-a89a8f6fb03c/volumes" Jan 26 23:08:56 crc kubenswrapper[4793]: I0126 23:08:56.761241 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:08:56 crc kubenswrapper[4793]: E0126 23:08:56.762051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:09:08 crc kubenswrapper[4793]: I0126 23:09:08.760308 4793 scope.go:117] "RemoveContainer" containerID="b88b07e3ba2cdfa2e7f40d97454de67d9dd0d11fdf68f43d89f41dcee202e497" Jan 26 23:09:08 crc kubenswrapper[4793]: E0126 23:09:08.761099 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5htjl_openshift-machine-config-operator(22a78b43-c8a5-48e0-8fe3-89bc7b449391)\"" pod="openshift-machine-config-operator/machine-config-daemon-5htjl" podUID="22a78b43-c8a5-48e0-8fe3-89bc7b449391" Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.065754 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.074067 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.080998 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-566zk"] Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.087316 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.094681 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-517e-account-create-update-z5vlt"] Jan 26 23:09:10 crc kubenswrapper[4793]: I0126 23:09:10.100629 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-bz7cb"] Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.028781 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.038326 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-ca47-account-create-update-8c675"] Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.771670 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05214d2f-078c-43c5-bf4d-c8d80580ce8f" path="/var/lib/kubelet/pods/05214d2f-078c-43c5-bf4d-c8d80580ce8f/volumes" Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.772454 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466a301d-4d91-4f49-831a-7d7f07ecd1bd" path="/var/lib/kubelet/pods/466a301d-4d91-4f49-831a-7d7f07ecd1bd/volumes" Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.773057 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d39f02-fd88-4b8f-8f71-0f4f1386ad9a" path="/var/lib/kubelet/pods/74d39f02-fd88-4b8f-8f71-0f4f1386ad9a/volumes" Jan 26 23:09:11 crc kubenswrapper[4793]: I0126 23:09:11.773623 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62f2482-4c97-42ed-92d7-1b0dc5319971" path="/var/lib/kubelet/pods/a62f2482-4c97-42ed-92d7-1b0dc5319971/volumes"